
UMT/OAM/APP/024291

Alcatel-Lucent Wireless Management System
WMS Product Engineering Guide
OAM 6.0

Document number: UMT/OAM/APP/024291
Document issue: 01.09/EN
Document status: Standard
Product release: OAM 6.0
Date: February 2010

© 2009-2010 Alcatel-Lucent
All rights reserved.

UNCONTROLLED COPY: The master of this document is stored on an electronic database and is write-protected; it may be altered only by authorized persons. While copies may be printed, printing is not recommended. Viewing the master electronically ensures access to the current issue. Any hardcopies taken must be regarded as uncontrolled copies.

ALCATEL-LUCENT CONFIDENTIAL: The information contained in this document is the property of Alcatel-Lucent. Except as expressly authorized in writing by Alcatel-Lucent, the holder shall keep all information contained herein confidential, shall disclose the information only to its employees with a need to know, and shall protect the information from disclosure and dissemination to third parties. Except as expressly authorized in writing by Alcatel-Lucent, the holder is granted no rights to use the information contained herein. If you have received this document in error, please notify the sender and destroy it immediately.

PUBLICATION HISTORY
March 2008
Issue 01.00 / EN, Draft
- Document creation

March 2008
Issue 01.01 / EN, Draft
- Update after review

July 2008
Issue 01.02 / EN, Preliminary
Engineering information with regard to the following features:
- 34266 RNC Counters List Management
- 33467 OAM support of 15-minute granularity on NodeB counters
- 34043 NPO Kernel, Hardware and system administration functions in OAM06
- 15694 Capability to integrate with a Storage Area Network
- 33882 FM support of 7670 ESE/RSP
- 33835 Extended Interface for integration into MS-Portal
- 24350 WMS East-West Interfaces

September 2008
Issue 01.03 / EN, Preliminary
- Update description in ST6140 and System Controller sections
- SSL added in security section
- Key Performance Indicators (KPI) added in NPO section
- Support of 939x Node B models (1)
- Update Backup & Restore section of NPO

September 2008
Issue 01.04 / EN, Preliminary
- New section added with regard to System Management
- New section added describing the WMS server failure scenarios and consequences

(1) The list of 939x Node B models is: 9391, 9392, 9393 and 9396.

December 2008
Issue 01.05 / EN, Preliminary
- Capacity figure restriction added (NPO cluster configuration) with 15-minute counters feature activated
- NPO Backup and Restore section updated
- New Optical Switch Brocade 300 replacing the FC Switch Brocade 200

March 2009
Issue 01.06 / EN, Standard
- WQA Engineering Notes updated
- Alarm Correlation rules updated

April 2009
Issue 01.07 / EN, Standard
- Bandwidth and throughput information update
- PC Client X Display recommendation update

August 2009
Issue 01.08 / EN, Standard
- NPO Engineering information update with regard to the following features:
  o 33376 Introduction of Netra240/SE3120 Successor for NPO (T5220)
  o 84187 M4000 Introduction for 3G NPO
- Windows OS update for WMS and WQA PC client

February 2010
Issue 01.09 / EN, Standard
- Introduction of M5000 for WMS (SF E4900 successor)
- Introduction of T5220 for WMS (Netra240/SE3120 successor)
- Introduction of NETRA T5440 for WMS (SF V890 successor)
- MS PORTAL section introduced (dimensioning model and capacity considerations)
- New Console Server introduction: MRV LX-4016T
- Volumetric and minimum throughput information added in Annex section
- Additional note on hard disk expansion in WQA
- Engineering requirements for NPO on M4000 in cluster mode (IP requirements)


- Update on PC RAM requirements for WIPS usage to 4 GB (in case of X-large network); client simultaneous usage table updated accordingly
- Additional information in Backup and Restore section, with a table describing the tape drive and server/domain compatibility matrix
- Update on M4000 CPU characteristics (2.5 GHz instead of 2.4 GHz)
- Update on HMI server with new HP PROLIANT DL320 G6 and the support of Windows Server 2003 SP2 (instead of SP1)


TABLE OF CONTENTS

1. ABOUT THIS DOCUMENT
   1.1. AUDIENCE FOR THIS DOCUMENT
   1.2. NOMENCLATURE
   1.3. SCOPE
   1.4. REFERENCES
2. OVERVIEW
   2.1. NETWORK MANAGEMENT FUNCTIONALITY
3. WMS MAIN SERVER ENGINEERING CONSIDERATIONS
   3.1. NSP OVERVIEW
   3.2. FAULT MANAGEMENT APPLICATIONS
   3.3. FAULT AND CONFIGURATION MANAGEMENT
   3.4. SRS FUNCTIONALITY
   3.5. PERFORMANCE MANAGEMENT APPLICATION
   3.6. CAPACITY
   3.7. BACKUP AND RESTORE
   3.8. INTEGRATION OF WMS TO A STORAGE AREA NETWORK (SAN)
4. WMS EXTERNAL INTERFACE ENGINEERING CONSIDERATIONS
   4.1. OVERVIEW
   4.2. THE ALCATEL-LUCENT SECURITY BUILDING BLOCK
   4.3. THE 3GPP NOTIFICATION BUILDING BLOCK
   4.4. 3GPP FAULT MANAGEMENT BUILDING BLOCK (3GPP FM BB)
   4.5. 3GPP BASIC CM BUILDING BLOCK (3GPP BASICCM BB)
   4.6. 3GPP BULKCM BUILDING BLOCK (3GPP BULK CM BB)
   4.7. 3GPP PM BUILDING BLOCK (3GPP PM BB)
   4.8. 3GPP BUILDING BLOCK DEPLOYMENT
   4.9. 3GPP EXTERNAL INTERFACE CAPACITY AND PERFORMANCE
   4.10. 3GPP PM BB EXTERNAL INTERFACE
   4.11. OSS REMOTE LAUNCH OF WMS GUI
   4.12. INTEGRATION OF WMS TO MS-PORTAL
   4.13. WMS EAST-WEST INTERFACE
5. WMS CLIENTS AND SERVER OF CLIENTS ENGINEERING CONSIDERATIONS
   5.1. WMS CLIENT CAPACITY
   5.2. WMS CLIENT USER PROFILE
   5.3. CLIENT ENGINEERING CONSIDERATIONS
   5.4. WMS SERVER OF CLIENTS ENGINEERING CONSIDERATIONS
6. HARDWARE SPECIFICATIONS
   6.1. OVERVIEW
   6.2. SERVER HARDWARE SPECIFICATIONS
   6.3. CLIENTS HARDWARE SPECIFICATIONS
   6.4. WQA SERVER HARDWARE SPECIFICATIONS
   6.5. GENERAL AVAILABILITY OF SUN SERVERS
   6.6. DCN HARDWARE SPECIFICATIONS
   6.7. OTHER EQUIPMENT
7. NETWORK ARCHITECTURE
   7.1. DEFINITION - NOC / ROC ARCHITECTURE
   7.2. REFERENCE ARCHITECTURE
   7.3. FIREWALL IMPLEMENTATION
   7.4. NETWORK INTERFACES ON WMS SERVER
   7.5. WMS SERVER IP ADDRESS REQUIREMENTS
   7.6. NETWORK INTERFACES ON NPO / MS PORTAL SERVER
   7.7. NETWORK INTERFACES ON WQA SERVER
   7.8. NETWORK INTERFACES ON CLIENTS
   7.9. NETWORK INTERFACES FOR REMOTE ACCESS
   7.10. OTHER NETWORKING CONSIDERATIONS
8. BANDWIDTH REQUIREMENTS
   8.1. BANDWIDTH CONSIDERATIONS WITHIN THE ROC
   8.2. BANDWIDTH REQUIREMENTS BETWEEN THE ROC AND THE NES
   8.3. BANDWIDTH REQUIREMENTS BETWEEN THE ROC AND THE CLIENTS
   8.4. BANDWIDTH REQUIREMENTS BETWEEN THE ROC AND EXTERNAL OSS
9. SECURITY AND REMOTE ACCESS
   9.1. OPERATING SYSTEM HARDENING
   9.2. AUDIT TRAIL
   9.3. WMS ACCESS CONTROL
   9.4. NE ACCESS CONTROL LISTS (ACL)
   9.5. USER SECURITY ADMINISTRATION
   9.6. MULTISERVICE DATA MANAGER (IP DISCOVERY)
   9.7. CENTRALIZED OAM USER MANAGEMENT
   9.8. USER SESSION MANAGEMENT
   9.9. SSL
   9.10. RADIUS/IPSEC
   9.11. SOLARIS SECURE SHELL (SSH)
   9.12. SNMP
   9.13. IP FILTERING
   9.14. FIREWALL
   9.15. SECURITY FEATURES ON THE WMS DCN EQUIPMENT
10. NETWORK TIME SYNCHRONISATION
   10.1. ABOUT NTP FUNCTIONALITY
   10.2. COMPATIBILITIES
   10.3. TIME SOURCE SELECTIONS
   10.4. REDUNDANCY AND RESILIENCY
   10.5. DEFAULT BEHAVIOUR OF WMS MAIN SERVER UNDER OUTAGE CONDITIONS
   10.6. RECOMMENDED NTP ARCHITECTURE
   10.7. USING PUBLIC TIME SOURCES OVER THE INTERNET
   10.8. NTP ACCURACY AND NETWORK DESIGN REQUIREMENTS
   10.9. NTP RESOURCE USAGE CONSIDERATIONS
11. 9359 NPO NETWORK PERFORMANCE OPTIMISER
   11.1. OVERVIEW
   11.2. HIGH LEVEL OVERVIEW AND SOLUTION ARCHITECTURE
   11.3. CAPACITY CONSIDERATIONS
   11.4. NPO CLUSTER
   11.5. QOS MANAGEMENT
   11.6. TOPOLOGY GRANULARITIES
   11.7. NPO PURGE FUNCTIONALITY
   11.8. CONSIDERATION FOR BACKUP AND RESTORE
   11.9. NPO CLIENT REQUIREMENTS
   11.10. ROUTING SWITCH AND BANDWIDTH REQUIREMENTS
   11.11. EXTERNAL INTERFACE CONSIDERATION
   11.12. NPO PERFORMANCE CONSIDERATIONS
12. MS PORTAL
   12.1. OVERVIEW
   12.2. DIMENSIONING MODEL
   12.3. CAPACITY CONSIDERATIONS
   12.4. BANDWIDTH AND CONNECTIVITY REQUIREMENTS
   12.5. HMI SERVER CONFIGURATION
   12.6. OTHER CONSIDERATIONS
13. W-CDMA QUALITY ANALYSER (WQA)
   13.1. HIGH LEVEL OVERVIEW AND SOLUTION ARCHITECTURE
   13.2. WQA CLIENT SPECIFICATIONS
   13.3. CONSIDERATION FOR BACKUP AND RESTORE
   13.4. CAPACITY CONSIDERATIONS
14. RADIO FREQUENCY OPTIMISER (RFO)
   14.1. OVERVIEW
   14.2. RFO SOLUTION PROCEDURE
   14.3. HARDWARE REQUIREMENTS
15. 5620 NM
   15.1. 5620 NM DATABASE NETWORKSTATION
   15.2. 5620 OPERATOR SERVER NETWORKSTATION
   15.3. CPSS ROUTER NETWORKSTATION
   15.4. STATISTICS COLLECTOR NETWORKSTATION
   15.5. 5620 CMIP/CORBA SERVER
   15.6. OPERATOR POSITION
   15.7. HARDWARE REQUIREMENTS
   15.8. BACKUP & RESTORE
   15.9. NETWORK INTERFACE/IP ADDRESSING
   15.10. BANDWIDTH REQUIREMENTS
   15.11. 7670 NODE TYPES
   15.12. 7670 INTEGRATION TO WMS
16. ANNEXES
   16.1. OBSERVATION FILES
   16.2. NE SOFTWARE
17. ABBREVIATIONS

LIST OF FIGURES
Figure 1: NSP Overview
Figure 2: Fault Management Architecture
Figure 3: Configuration Management Architecture
Figure 4: Performance Management Architecture
Figure 5: Dual Main Server configuration
Figure 6: SMG architecture
Figure 7: Alarm Correlation Functional Diagram
Figure 8: Storage Area Network Architecture
Figure 9: 3GPP FM High Level architecture
Figure 10: Basic CM/Kernel CM High Level architecture
Figure 11: Bulk CM High Level architecture
Figure 12: PM High Level architecture
Figure 13: 3GPP Output Building Block Deployment within a ROC
Figure 14: WMS East-West Interface
Figure 15: RAMSES Solution Architectural Diagram
Figure 16: Reference Architecture
Figure 17: Example of E4900 with System controller and ST6140 connectivity
Figure 18: M5000 with System controller and ST2540 connectivity
Figure 19: Example of NETRA T5440 with System controller connectivity
Figure 20: M4000 with System controller and ST2540 connectivity
Figure 21: M4000 with System controller and ST2540 connectivity
Figure 22: Magnified View of M4000-4 CPU Interface connectivity in Cluster Mode
Figure 23: Subnet Groups in a NPO Cluster
Figure 24: NPO Cluster Fibre Channel Switch Connectivity
Figure 25: NPO Cluster Fibre Channel Switch Redundancy
Figure 26: Terminal Server Connections
Figure 27: Recommended Time Synchronization Architecture
Figure 28: NPO Architecture
Figure 29: NPO Cluster Architecture
Figure 30: NPO Backup and restore overview
Figure 31: Centralized Backup & Restore architecture
Figure 32: MS-PORTAL architecture
Figure 33: WQA Architecture
Figure 34: WQA Backup & Restore
Figure 35: NetworkStations in a 5620 network
Figure 36: 7670 Network Management from WMS


LIST OF TABLES
Table 1: WMS Nominal Main Server Capacity
Table 2: WMS Legacy Main Server Capacity
Table 3: WMS failure scenarios and consequences
Table 4: Maximum recommended threshold alarms per server type
Table 5: Simultaneous software downloads to Access NE (Nominal machines)
Table 6: Simultaneous software downloads to Access NE (legacy machines)
Table 7: Typical software size per Access NE
Table 8: Supported GPM data granularities
Table 9: Call Trace Type Definitions
Table 10: Call Trace Engineering guidelines and daily recommended volumes of data
Table 11: Maximum number of standing alarms per hardware type
Table 12: Tape drive and Domain/Server compatibility matrix
Table 13: 3GPP FMBB Specifications
Table 14: 3GPP CM BB Specifications
Table 15: Number of concurrent clients per Main Server type
Table 16: Number of Registered users per ROC
Table 17: Sun N240 Hardware Requirements
Table 18: SUN SPARC ENTERPRISE T5220 Hardware Requirements
Table 19: Sun V890 Hardware Requirements
Table 20: SUN NETRA T5440 Hardware Requirements
Table 21: SF E4900 Hardware Requirements
Table 22: Sun StorEdge 6140 Hardware Requirements
Table 23: SUN ENTERPRISE M5000 Hardware Requirements
Table 24: SF V490 Hardware Requirements
Table 25: SUN SPARC ENTERPRISE M4000
Table 26: Sun Ultra 45 Hardware Requirements
Table 27: Windows PC Hardware Requirements for WMS
Table 28: Windows PC Hardware Requirements for MS-PORTAL
Table 29: RAM requirements for client simultaneous usage
Table 30: WQA Hardware Specifications
Table 31: VPN Firewall Brick Platform
Table 32: VPN Router Platform
Table 33: Terminal Server Console Specifications
Table 34: Lexmark Printer Hardware Requirements
Table 35: Interface Configuration - Configuration A
Table 36: Interface Configuration - Configuration B
Table 37: Interface Configuration - Configuration C
Table 38: Interface Configuration - Configuration D
Table 39: Interface Configuration - Configuration E
Table 40: Supported Interface Configurations per server type (Nominal)
Table 41: Supported Interface Configurations per server type (legacy)
Table 42: Interface Naming Convention
Table 43: WMS IP Requirements Summary
Table 44: Interface configuration on NPO or MS-Portal
Table 45: T5220/T5440 NPO / MS PORTAL IP Requirements Summary
Table 46: Interface configuration on NPO or MS-Portal
Table 47: M4000-2CPU NPO / MS PORTAL IP Requirements Summary
Table 48: Interface configuration on NPO or MS-Portal
Table 49: Subnet and IP Addressing configuration on NPO or MS-Portal
Table 50: M4000-4CPU NPO / MS PORTAL IP Requirements Summary
Table 51: Protocols used on southbound Interfaces
Table 52: Bandwidth Requirements for RNC Call Trace (maximum value)
Table 53: Bandwidth Requirements for RNC CN Observation counters
Table 54: Maximum number of simultaneous software downloads
Table 55: SysLog message characteristics
Table 56: WMS Main Server to OSS Bandwidth Requirements
Table 57: NEs supporting RADIUS/IPSec
Table 58: NPO Server Packaging
Table 59: Local Backup & Restore - OSB Scope of usage
Table 60: NPO Server Tape Drive Throughput
Table 61: Centralized Backup & Restore Scope of usage
Table 62: NPO key performance indicators
Table 63: MS-SUP Server Capacity
Table 64: MS-NPO Server Capacity
Table 65: MS-PORTAL HMI Server Hardware Configuration and Capacity
Table 66: CPSS Addressing


1. ABOUT THIS DOCUMENT

This document details the engineering rules for the WMS Main Server, OAM Server
hardware/software requirements, OAM DCN recommendations, backup and restore, remote access
and other OAM engineering information for WMS.

1.1. AUDIENCE FOR THIS DOCUMENT

This WMS Engineering Guide has been specifically prepared for the following audience:
- Network Engineers
- Installation Engineers
- Network & System Administrators
- Network Architects

1.2. NOMENCLATURE

<Engineering rule>: The OAM rules (non-negotiable) are typically OAM capacity values and IP addressing parameters (subnet, range, etc.).
<System restriction>: A system restriction can be a feature that is not applicable to an OAM hardware model.
<Engineering recommendation>: Mainly recommendations related to performance (QoS, capacity, KPI) to get the best out of the network.
<Engineering note>: Can be an option suggestion, or a configuration note that can be operator dependent.

1.3. SCOPE

This Engineering Guide is for Alcatel-Lucent WMS for release OAM06.

In scope of this Engineering Guide:
- Alcatel-Lucent 9353 WMS (W-CDMA Management System, previously WMS)
- Alcatel-Lucent 9359 NPO (Network Performance Optimizer)
- Alcatel-Lucent MS-PORTAL, including the 9959 MS-NPO (Multi Standard - Network Performance Optimizer) and/or the 9953 MS-SUP (Multi Standard - Supervision Portal)
- Alcatel-Lucent 9352 WQA (W-CDMA Quality Analyzer)
- Alcatel-Lucent 9351 WPS (W-CDMA Provisioning System)
- Alcatel-Lucent RFO (Radio Frequency Optimizer)
- Alcatel-Lucent 5620 Network Manager (NM)

Not in the scope of this Engineering Guide:
- Other OAM platforms not part of WMS

Throughout the document, these different management components will be referred to by their main names, such as WMS, WPS, etc.

1.4. REFERENCES

All references about Alcatel-Lucent's WMS can be found in the following Alcatel-Lucent Technical Publications:
- Alcatel-Lucent 9300 W-CDMA Product Family - Document Collection Overview and Update Summary (NN-20500-050)

Additional updates and corrections can be found in the OAM Release Notes for the particular release. For further information on how to obtain these documents, please contact your local Alcatel-Lucent representative.

2. OVERVIEW

Alcatel-Lucent Wireless Management System (WMS) delivers an integrated OAM management platform across the radio access, IP/ATM backbone and service-enabling platform domains. WMS plays an important role in providing the foundation network management capabilities for the Alcatel-Lucent solution for complete, end-to-end management of the wireless network.

2.1. NETWORK MANAGEMENT FUNCTIONALITY

WMS is focused on efficiently delivering the foundation on which to deploy and maintain the Wireless Internet network resources, deliver services, and account for network and service use by subscribers. The key functions of the network management layer are described below.

2.1.1 NETWORK MANAGEMENT PLATFORM


Network Services Platform (NSP) is the underlying platform or operating environment that enables
management of the network resources and of the services being delivered to customers. The platform
provides a single, integrated view of the entire Alcatel-Lucent wireless network across radio access
and the service enabling platforms as well as a launch pad for all pre-integrated internal and/or
underlying systems and tools.

2.1.1.1 FAULT MANAGEMENT

NSP fault management tools provide an integrated set of fault surveillance, diagnosis and resolution tools that span the radio access domain as well as the service-enabling platforms and the IP/ATM backbone, giving the operator a single alarm view across the entire network. These tools enable the operator to identify and resolve network- or service-affecting issues quickly and efficiently.
WMS Fault Management functionality for the wireless network includes: Alarm Management (real-time alarm surveillance, delivered as an integral part of the NSP), Historical Fault Management (Historical Fault Browser) and the Trouble Ticketing Interface.
Also included in WMS Fault Management functionality is the ability to perform alarm filtering, specifically the support of alarm delay and alarm inhibit capabilities on the alarm stream. In addition, the ability to modify the alarm severity attribute of the alarm stream lets operators optimize their alarm handling.

2.1.1.2 PERFORMANCE MANAGEMENT

WMS Performance Management functionality for wireless networks includes, as a base, Performance Monitoring (near real-time) together with the collection/mediation of measurements and their conversion to 3GPP-compliant XML format for use with any third-party Performance Management tools. From OAM06, the Performance Server functionality (which was previously on a separate Sun server) co-resides on the WMS Main Server.
The WMS performance management tools are designed for viewing and optimizing network element and service performance across Alcatel-Lucent radio access (UMTS). Performance Management helps service providers to pinpoint and resolve potential network performance issues before they become a problem for their end customers.
For Performance Reporting (historical), a powerful tool, NPO (Network Performance Optimizer), is offered as an option.
To optimize neighbouring cells, the WQA (W-CDMA Quality Analyzer) tool, based on neighbouring-cell Call Traces, is also offered as an option.

Finally, to post-process Call Trace data (CTx), a powerful tool based on years of industry experience, the Radio Frequency Optimizer (RFO), is introduced in OAM06.

2.1.1.3 CONFIGURATION MANAGEMENT

An integrated set of capabilities designed to configure parameters of all network elements within the wireless network is provided as part of WMS. Configuration Management has two aspects. Off-line configuration tools make the most time-consuming configuration activities efficient and effective through pre-integrated assistants for standard configuration tasks. On-line configuration is performed via an integrated set of network-element-focused configuration tools, accessible directly from the management platform via a context-sensitive launch capability, ensuring network element configuration can be done quickly, easily and with minimal risk of errors. WMS Configuration Management functionality includes off-line and on-line configuration for the radio access network (UMTS), combined with on-line configuration reach-through across the entire network.

2.1.1.4 INTERFACE TO UPSTREAM MANAGER

WMS offers 3GPP OAM standards-compliant interfaces to allow customers' OSSs to manage the Alcatel-Lucent wireless networks. The 3GPP-compliant Itf-N interfaces are based on the 3GPP standards, and the solutions offered include support for the Alarm IRP, the BasicCM IRP, the BulkCM IRP (UMTS Access), as well as support of XML transfer of 3G performance counters. The Alarm IRP allows fault OSSs to receive, through a 3GPP-compliant interface, alarm information from the Alcatel-Lucent wireless networks.
The BasicCM IRP allows the OSS to discover network elements as well as attributes of the network elements. The BulkCM IRP allows the OSSs to bulk-provision standards-based attributes of the UTRAN networks.
The support of the XML interface for performance allows performance OSSs to gather performance statistics from the Alcatel-Lucent wireless networks using standards-compliant mechanisms.

2.1.2 HARDWARE PLATFORM

WMS is delivered on a simple, scalable hardware platform strategy designed to grow effectively with the rollout of wireless services.
In OAM06, a new feature optimizes the network management hardware by co-residing the previously separate Main and Performance Servers on one Sun platform, i.e. on the WMS Main Server. This server is dedicated to providing Fault, Configuration, Performance, Security, User and Network Element Access Management, among other functionalities.
The client workstations supported for management of the wireless network include Sun workstations and PCs. They host the WMS clients along with the standalone configuration tool WPS and the Call Trace analysis tool RFO.
Additionally, there are dedicated hardware platforms for optional applications such as NPO and WQA.

2.1.3 5620 NETWORK MANAGER

As an extension to the WMS solution, the 5620 Network Manager is introduced to manage the new 7670 RSP and 7670 ESE network elements, which replace the Passport 7k and Passport 15k.

3. WMS MAIN SERVER ENGINEERING CONSIDERATIONS

This chapter gives an architectural overview and describes the general engineering rules for the WMS Main Server.
The Main Server is the heart of the network management platform for managing the Alcatel-Lucent radio access network.
From OAM06, the Main Server functionality is enhanced to provide Performance Management of the UTRAN network in addition to Fault, Configuration, User Access and System Management, the Software Repository of the wireless network and the 3GPP-compliant Itf-N.
The different components of the Main Server are as follows:

3.1. NSP OVERVIEW

NSP (Network Services Platform) is an integrated telecommunications network management software platform developed by Alcatel-Lucent that provides a single point of control for the operation, administration, maintenance, and provisioning functions for telecommunications networks in a multi-domain network management environment. NSP uses a scalable client-server infrastructure supported directly by distributed CORBA application building blocks and CORBA device adapters.
The NSP architecture is described as follows:
- Device adapters collect real-time data from the network (either from the network elements themselves or from element management systems such as MDM or the Access Module) and translate network element data into a format that the NSP applications can process.
- The collected data from the various device adapters is passed to distributed CORBA applications (building blocks).
- These building blocks (SUMBB, FMBB, TUMSBB) process the data and, where necessary, summarize it.
- The processed data is then provided to client applications. Java-based, multi-platform enabled GUI clients display the processed data.

[Figure 1: NSP Overview - device adapters (ASCII, SNMP, CORBA, CMIP) collect alarm and NE information from the network elements and element management systems; FMBB and TUMSBB pass detailed alarm/NE data and summaries via SUMBB to the user GUI client and its fault and other plug-ins]

3.1.1 NSP COMPONENT OVERVIEW


This section gives a high-level definition of the different components of NSP.

3.1.1.1 NSP GRAPHICAL USER INTERFACE

The NSP GUI is a Java-based GUI with point-and-click navigation. It provides integrated real-time fault management capabilities, the ability to view OSI node state information for data devices (where supported) and a context-sensitive reach-through to underlying EMSs and devices. The NSP GUI also provides application launch, customer-configurable custom commands, nodal discovery of devices, technology layer filtering (i.e. wireless, switching, IP transport), access controls, network partitioning, and multiple independent views.
NSP provides the ability to launch other applications (e.g. element provisioning) directly from NSP using Application Launch scripts, delivering a single point of access to multiple applications. NSP enables easy, in-context reach-through to underlying network element interfaces or Element Management Systems (EMS), via the drop-down menu accessible from each NE's icon.

3.1.1.2 FAULT MANAGEMENT BUILDING BLOCK (FMBB)

FMBB acts as the common point of contact to provide integrated alarm information for the entire network. FMBB provides the following fault management interfaces:
- Alarm Log Monitor interface, which allows its clients to retrieve a current snapshot of alarms within the system
- Alarm Manager interface, which allows clients to monitor alarms on an ongoing basis
- Control interface, which allows clients to acknowledge alarms and manually clear alarms


FMBB communicates with application clients via the Object Request Broker (ORB) to service requests for alarm and event information. FMBB also communicates with Device Adapters (DAs) via the ORB to retrieve the data requested by the application clients.
FMBB is solely concerned with the current alarms/events for the network, that is, the alarm conditions as they occur, or only those alarm conditions which are still active on the network elements. Other service assurance applications, such as the Historical Fault Browser (HFB), address the requirement for alarm history. There is one FMBB per WMS Main Server.
For scalability, multiple instances of FMBB can be deployed. Typically, a network could be subdivided into sub-domains; in such a deployment, one instance of FMBB would be responsible for the alarm information from a single sub-domain.

3.1.1.3 TOPOLOGY UNIFIED MODELLING SERVICE BUILDING BLOCK (TUMSBB)

The WMS Topology Unified Modelling Service (TUMS) is used for NE discovery and network layer management.
Network element discovery is done using an interface between TUMS and the DA. When the DA discovers new NEs, it reports these to TUMS. TUMS registers this information with the NSP GUI (via SUMBB) and the NE is available to be added to a network layout.

3.1.1.4 SUMMARY SERVER BUILDING BLOCK (SUMBB)

The Summary Server (SUMBB) summarizes fault information passed to it via FMBB, and NE information passed to it via TUMSBB. The NSP GUI then uses this information to report to the user.
As well as summarizing alarm information, SUMBB is used to store and process all of the information that identifies layouts, groups and NEs within NSP. This provides the means to partition NEs into groups and layouts for different sets of users.

3.2. FAULT MANAGEMENT APPLICATIONS

The following applications provide the operator with additional service assurance features to manage their network:

3.2.1.1 HISTORICAL FAULT BROWSER (HFB)

The Historical Fault Browser (HFB) provides a generic event history capability across WMS-managed network elements. It has a flexible query mechanism allowing users to aggregate selective alarm history information. Specifically, HFB captures all alarm data for historical analysis, incident reporting, and customer impact analysis. A Web-based graphical user interface (GUI) provides easy access to fault information for troubleshooting in Operations Centres or remote locations. The HFB allows the user to perform the following tasks:
- Filter an alarm list on any displayed field from the database
- Display multiple queries at the same time, each in a separate window
- Sort alarms by any column in the table
- Display retrieved alarm data in hypertext mark-up language (HTML) report format
- Store queried data to file
- Print selective alarm event data from the database

HFB retrieves raise-alarm and clear-alarm events from the network via the WMS Building Block architecture. The Historical Fault Browser automatically supports newly added network elements without any additional configuration required. Alarm events are stored in an Oracle Relational Database Management System (RDBMS).
HFB Query Interface
In OAM06, a new feature called the HFB query interface is introduced. It generates advanced tabular and graphical reports from the HFB and stores them as comma-separated values (CSV) plain-text files. Users can download the files from the primary Main Server to create specific reports using standard tools like Excel.
To build a report, the user issues one WICL command with appropriate parameters. At a minimum, the command must envelop a SQL statement, which is used to query the result record set, and the location of the file to be returned for the user to download. The user's WICL command is reassembled into one or more pure Oracle PL/SQL statements, which are passed through the WICL engine to a Shell/Tcl script. The script then launches SQL*Plus and executes the SQL statements within a predefined procedure. Finally, the procedure saves the query result to a CSV-formatted data file at the location denoted by the argument of the WICL command.
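The final step of this flow — a SQL statement run through SQL*Plus and spooled to a CSV file for download — can be pictured with the minimal sketch below. It is illustrative only: the connect string, table and column names are hypothetical, and in the real system this step is driven by the WICL engine rather than by a user script.

```python
import subprocess

# Illustrative only: run a SQL statement through SQL*Plus and spool the
# result to a CSV file, mimicking the last step of the HFB query
# interface. The connect string and hfb_alarms table are placeholders.
def query_to_csv(connect_string: str, sql: str, out_file: str) -> None:
    script = f"""SET HEADING OFF FEEDBACK OFF PAGESIZE 0
SET COLSEP ','
SPOOL {out_file}
{sql}
SPOOL OFF
EXIT
"""
    # -S runs SQL*Plus silently (no banner or prompts).
    subprocess.run(["sqlplus", "-S", connect_string],
                   input=script, text=True, check=True)

query_to_csv("hfb_user/secret@WMSDB",
             "SELECT alarm_id, severity, ne_name FROM hfb_alarms;",
             "/tmp/alarm_report.csv")
```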

3.2.1.2 TROUBLE TICKETING (TT)

The Trouble Ticketing Interface provides an interface between WMS software and trouble management software systems. It gives network operators the ability to create trouble reports with complete fault and originator information. Trouble tickets for alarm events raised within WMS are seamlessly managed by a number of third-party trouble ticketing systems that support simple mail transfer protocol (SMTP) interfaces, directly from WMS, for all network elements managed by WMS. The Trouble Ticketing Interface provides the ability to:
- Retrieve all existing open trouble tickets and their relationship to the network alarm selected when the ticket was created
- Request the creation of a trouble ticket associating a unique network alarm as a related object to the created ticket
- Register and receive notification when a creation request has been completed

The Trouble Ticketing Interface accepts responses from the trouble management systems, allowing bi-directional, inter-working communication between the two. This means that the trouble ticket identifier assigned by the third-party trouble management system is tagged to the WMS alarm object and displayed in the Alarm Manager. Alarms that are cleared in WMS are forwarded to the trouble ticketing system using the previously assigned identifier, allowing the alarm to be properly cleared in the trouble ticketing system. This bi-directional capability thus resolves the time-consuming and error-prone process of manually synchronizing the two systems. A sketch of such an SMTP exchange follows the list of supported systems below.
The Trouble Ticketing application provides inter-working with the following trouble management systems:
- Clarify's Clear Support Trouble Management system
- Remedy's Action Request System
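Because the integration rides on plain SMTP, a ticket creation request can be pictured as an e-mail message. The sketch below is a hypothetical illustration: host names, addresses and the body layout are invented, and the actual message format is defined by the WMS release and the ticketing system.

```python
import smtplib
from email.message import EmailMessage

# Hypothetical sketch of an SMTP-based trouble-ticket creation request.
# Hosts, addresses and the body layout are illustrative placeholders.
def request_ticket(alarm_id: str, ne_name: str, severity: str) -> None:
    msg = EmailMessage()
    msg["From"] = "wms@roc.example.com"
    msg["To"] = "ticketing@oss.example.com"
    msg["Subject"] = f"CREATE_TICKET alarm={alarm_id}"
    # Carrying the unique alarm identifier lets the ticketing system
    # answer with a ticket ID that WMS can tag onto the alarm object,
    # so a later clear can be matched to the right ticket.
    msg.set_content(f"alarm_id={alarm_id}\nne={ne_name}\nseverity={severity}\n")
    with smtplib.SMTP("mailhost.roc.example.com") as smtp:
        smtp.send_message(msg)

request_ticket("000123", "RNC-01", "major")
```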

3.3. FAULT AND CONFIGURATION MANAGEMENT

This section gives a high-level architecture overview of the fault and configuration management within WMS.

[Figure 2: Fault Management Architecture - fault information flows from Access and other Multiservice Switch network elements through the EMS layer (Access Object Model Manager, Multiservice Data Manager) and the Access DA layer to the Network Services Platform (NSP), which serves the NSP GUI client and the 3GPP FMBB interface to the OSS]


[Figure 3: Configuration Management Architecture - clients (ACCESS GUI, MDM GUI, WPS, standalone access via CLI or Web) and the OSS 3GPP Basic CM IRP Manager reach the Access network elements via TUMS BB / 3GPP CM BB and the Access module, and the IP/ATM backbones via MDM]



The following sections describe the tools used within the components:

3.3.1 ACCESS-DA

The Access Object Model Manager portion of the Access Module sends fault information to the Access DA. The Access DA provides the OAM facilities of the Access network through the following fault management functionalities:
- Receiving and storing the notifications from the NEs in the access network
- Converting the notifications into alarms
- Transmitting the alarms to the GUI

The Access DA also receives fault information from the RNC I-Node via MDM APIs.

3.3.2 ACCESS MODULE

The Access Object Model Manager and the Access Device Adapter are parts of what is called the UMTS Access Module. The Access Object Model Manager is the element management system for the NODE B and the RNC C-Node.
The UMTS Access Object Model Manager portion of the UMTS Access Module directly manages these devices over a proprietary messaging interface called SE/PE (over TCP/IP). It then sends the fault information to the Access DA portion of the Access Module. It has a shelf-level display of the elements that it manages, which can be launched in-context from the NSP GUI.
The configuration of the Access devices is for the most part controlled and coordinated by the Access Module, except for the RNC IN/POC, which is configured through MDM. Note that the configuration through MDM for the RNC IN/POC is transparent to the user.
The whole RNC configuration is performed through the Access Module: CM XML files for the C-Node, A-Node and I-Node are constructed in the Alcatel-Lucent Wireless Provisioning System, then imported into the Access Module, which uses the MDM API to transmit CAS commands towards the Multiservice Switch.

3.3.3 MULTISERVICE DATA MANAGER

The Multiservice Data Manager (MDM) is Alcatel-Lucent's element management system for managing Multiservice Switch devices. Applicable portions of the MDM technology are integrated and co-resident in WMS. MDM's primary role in WMS is to mediate fault and configuration information between Multiservice Switch-based devices and higher layers of the OAM solution. MDM also mediates fault information from SNMP devices deployed in UMTS.
MDM's real-time performance monitoring functionality is integrated into WMS, and supports multiple domains.
Applications and utilities of MDM's suite are also deployed and may be used if required.

3.4. SRS FUNCTIONALITY

The Software Repository Server (SRS) is used to store the software installed on the wireless network. This server contains software in a format ready to be used by all the installation tools. The software is obtained from a web server, by e-mail or on CD-ROM.
The WMS software tar files are available from the Alcatel-Lucent web site (e-delivery), by e-mail or on CDs, in compressed format; the third-party tools allowed for the compression of these files are gzip (extension .gz), compress (extension .Z) and zip (extension .zip).
There is only one SRS per ROC, located on the WMS Main Server. This SRS can be shared by several ROCs. The SRS functionality on the WMS Main Server covers WMS load patches and Access NE software loads. The SRS contains dedicated software accessible by any web browser. This tool helps the end user to install the delivery files at the right location on the SRS.
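As an illustration of the three supported delivery formats, the following minimal sketch unpacks a delivery file according to its extension. The paths and file names are hypothetical; the real placement into the SRS tree is handled by the SRS tool itself.

```python
import gzip
import shutil
import subprocess
import zipfile
from pathlib import Path

# Illustrative sketch only: unpack a delivery file according to the
# three supported compression formats. Paths are hypothetical.
def unpack_delivery(archive: Path, dest_dir: Path) -> None:
    if archive.suffix == ".gz":
        # gzip wraps a single compressed file; stream it out.
        with gzip.open(archive, "rb") as src, \
             open(dest_dir / archive.stem, "wb") as dst:
            shutil.copyfileobj(src, dst)
    elif archive.suffix == ".Z":
        # Legacy compress(1) format; Solaris 'uncompress' replaces
        # file.Z with the uncompressed file in place.
        subprocess.run(["uncompress", str(archive)], check=True)
    elif archive.suffix == ".zip":
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(dest_dir)
    else:
        raise ValueError(f"unsupported delivery format: {archive.name}")

unpack_delivery(Path("/srs/incoming/WMS_patch.tar.gz"), Path("/srs/loads"))
```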

3.5. PERFORMANCE MANAGEMENT APPLICATION

The Performance Management application offers the following capabilities:
- Collection of measurements from the network elements
- Export of these measurements in XML format
- Call Trace

For more information on the WQA, NPO and RFO post-processing tools, please refer to the relevant chapters in this document.
The following figure represents the architecture of the Performance Management application.

[Figure 4: Performance Management Architecture - on the Main Server, APM collects from the RNC and Node B and MDP collects from MSS-based devices; ADI and PDI mediate the data to the 3GPP XML interface, from which NPO and other systems pull files via FTP]


The different components are explained as follows:

3.5.1 DATA COLLECTION

The collection of performance measurements is based on the following components:

3.5.1.1 ACCESS PERFORMANCE MANAGEMENT (APM)

The Access Performance Manager (APM) is the Access collector: it collects the raw performance data from the access network elements (RNC and NODE B) using FTP.
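A minimal sketch of one such FTP collection pass is shown below. The host name, account and remote directory layout are hypothetical placeholders, not the actual APM configuration.

```python
from ftplib import FTP

# Hedged sketch of one FTP collection pass from an access NE. The
# host, credentials and directory names are illustrative only.
def collect_raw_files(ne_host: str, remote_dir: str, local_dir: str) -> None:
    with FTP(ne_host) as ftp:
        ftp.login("perfmgmt", "secret")
        ftp.cwd(remote_dir)
        for name in ftp.nlst():  # list the observation files
            with open(f"{local_dir}/{name}", "wb") as f:
                ftp.retrbinary(f"RETR {name}", f.write)

collect_raw_files("rnc01.ran.example.com", "/observations", "/var/opt/apm/raw")
```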

3.5.1.2 MANAGEMENT DATA PROVIDER (MDP)

The Management Data Provider (MDP) is the collector for the Multiservice Switch devices (i.e. devices without any wireless-specific software). It retrieves the Multiservice Switch counters onto the WMS Main Server.

3.5.2 DATA MEDIATION

The performance data collected by APM is converted into the 3GPP-compliant XML file format by the following interfaces and processes:

3.5.2.1 ACCESS DATA INTERFACE (ADI)

The ADI is the interface for Access performance and configuration data to the performance reporting application. ADI mediates counters and call trace data from the devices' native format into XML files. ADI converts the raw data to the XML file format in the 3GPP XML interface directory, and aggregates the supported performance data into hourly XML files, which are also placed in the 3GPP XML interface directory.

3.5.2.2 MDP

The CC files collected from the Multiservice Switch-based devices are converted into BDF files. After
conversion, the BDF files are further processed by the PDI.

3.5.2.3 PACKET DATA INTERFACE (PDI)

PDI converts the files from Multiservice Switch based devices to an XML format. PDI does not perform
time based counter aggregation. However, new functionality on the PDI supports the merging of the
multiple files which a Multiservice Switch shelf can generate within a single 15 minute period.

3.5.2.4 XML COMPRESSION (GZIP)

All XML data interfaces support XML compression. When compression is enabled, files carry an added gzip
extension. Compression is recommended to increase storage time on the WMS Server as well as to lower bandwidth
requirements for transfers to an external OSS. The external OSS must be compatible or have a mechanism to
decompress the XML files; a minimal reading sketch is shown below.
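
As an illustration of handling these compressed interface files, the minimal Python sketch below reads one gzip-compressed 3GPP XML observation file; the file name is a hypothetical example, not an actual WMS naming rule.

import gzip
import xml.etree.ElementTree as ET

# Hypothetical example name for a compressed 3GPP XML observation file
path = "A20100215.1500-1515_RNC01.xml.gz"

# gzip.open in text mode decompresses transparently while parsing
with gzip.open(path, "rt", encoding="utf-8") as f:
    tree = ET.parse(f)

root = tree.getroot()
print(root.tag, len(root))  # root element name and number of children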

3.6. CAPACITY

Engineering Rule: Nominal Server capacity

The Main Server capacities over the different nominal (orderable) platforms are defined in Table 1
below.


Hardware Platform (Nominal) | RNC | NODE B (max 3G cells)
M5000 (8 CPU) + 1*ST2540 & expansion tray ST2501 | 50 | 4000 (24000)
M5000 (4 CPU) + 1*ST2540 & expansion tray ST2501 | 20 | 2000 (12000)
NETRA T5440 (2 CPU) | 7 | 700 (4200)
T5220 (1 CPU) | 3 | 150 (900)

Table 1: WMS Nominal Main Server Capacity

Engineering Rule: Legacy Server capacity


The legacy Main Server capacities over the different platforms (not orderable) are defined in Table 2
below.

Hardware Platform | RNC | NODE B (max 3G cells)
SF4900 (12 CPU) | 30 | 3000 (18000)
SF4900 (8 CPU) | 20 | 2000 (12000)
SF4800 (12 CPU) | 24 | 2400 (14400)
SF4800 (8 CPU) | 16 | 1600 (9600)
SF v890 (4 CPU) | | 700 (4200)
SF v880 (4 CPU) | | 550 (3300)
N240 (2 CPU) | | 150 (900)

Table 2: WMS legacy Main Server Capacity

Engineering Restriction: Feature restrictions on the SF v250, Sun N240 and SF v880
Three features are currently restricted on the SF v250, Sun N240 and SF v880 due to the 8 GB
RAM limitation. These features are:
- Alarm Correlation
- WMS East-West Interface
- Extended Interface for integration into MS-Portal


Engineering Note: 939x NODE B

The above capacity tables do not include the 939x NODE B Network Elements (the 939x Node B models are: 9391, 9392, 9393 and 9396).

Engineering Note: Hardware re-use


With the Co-resident FCPS Main Servers feature, which removes the need for a separate Performance
Server, the CPU/Memory board of the Performance Server on an SF v880 can be re-used in the Main
Server SF v880, increasing it to an 8-CPU/16 GB configuration and supporting the
capacity in Table 2 without any feature restriction.

Engineering Note: SFV250


The SF v250 is not mentioned in the above tables but can be used for WMS strictly for trials. The SF v250
should not be used in a live network deployment.
The Secondary Main Server configuration is not supported on a V250 platform.

3.6.1 DUAL MAIN SERVER CONFIGURATION


This section provides an overview of the functionality of a dual Main Server configuration and the corresponding
limitations of such a design.
The configuration consists of two Main Servers (Primary and Secondary) with only one instance of the
Summary Server Building Block (SUMBB) and one instance of the Historical Fault Browser database
(HFB), both residing on the Primary Main Server. All WMS clients communicate with and get information from
the SUMBB and the HFB on the Primary Main Server. When the NSP client requires more details than
what is available on the Primary Main, it communicates with the TUMSBB and the FMBB of each
server.
The purpose of deploying both a Primary and a Secondary Main Server is to allow for scalability through the
management of a greater number of Network Elements than could be supported on a single Main
Server.
Clients always connect to the Primary Main Server. There is no LDAP server on the Secondary. It is
transparent to the users which server manages each NE. Users are able to see the alarms and the
configuration for all the NEs. Launching of tools such as MDM is performed from the associated
Main Server. For UMTS Access NEs, the associated Main Server is indicated in the UTRAN CM XML
files.
The Dual Main Server configuration does not provide any extra level of redundancy beyond what is
available with a single Main Server, nor does it provide load balancing.
It does provide the appearance at the user level of one large server, in that
information from the NEs is fed through a single common instance of the Summary Server Building
Block (SUMBB) located on the Primary Main Server. Note that if the Primary Main Server is out of
service, the Secondary Server provides only limited services. Refer to section 3.6.3 for more information
with regards to failure scenarios and their consequences in the system.
The deployment of a Dual Main Server configuration requires careful advance planning to ensure
appropriate NE management and should take into account, among other factors, the regional allocation

of the NEs themselves. Ideally, the planning of a Dual Main Server deployment will occur during the
CIQ data-fill process.
Figure 5: Dual Main Server configuration (WMS clients and the OSS connect through the 3GPP interface, Security, SUMBB and HFB hosted on the Primary Main Server; each Main Server runs its own TUMSBB, FMBB, Access DAs and data collection/XML conversion for the RNCs and NodeBs it manages, producing XML observation files)


Also, the Primary Main Server and the Secondary Main Server should be collocated on the same
LAN.
Each Network Element needs to be integrated on only one server. The integration of the NEs should
occur in such a manner that groups of NEs that will be managed together reside on the same server.
This allows a higher number of concurrent user sessions.

Engineering Recommendation: Capacity Load Sharing

It is recommended to distribute the NEs across both Main Servers, keeping in mind that the Primary
Main Server capacity is reduced when a Secondary Main Server is connected. Capacity numbers
should be reduced by 20% on the Primary Main Server when a Secondary Main Server is deployed,
and may be increased by 20% on the Secondary (with the exception of the SF 4X00 12 CPU platform,
which is already at its scalability limits). A minimal sketch of this adjustment is given below.
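
The Python sketch below illustrates the 20% adjustment using the nominal Node B capacities of Table 1; the platform keys and dictionary are illustrative only.

# Nominal Node B capacities from Table 1 (illustrative keys)
NOMINAL_NODEB = {"M5000-8CPU": 4000, "M5000-4CPU": 2000, "T5440": 700, "T5220": 150}

def dual_server_nodeb_capacity(platform):
    nominal = NOMINAL_NODEB[platform]
    primary = nominal * 0.80    # Primary reduced by 20% when a Secondary is attached
    secondary = nominal * 1.20  # Secondary may be increased by 20% (not SF 4x00 12 CPU)
    return primary, secondary

print(dual_server_nodeb_capacity("M5000-4CPU"))  # (1600.0, 2400.0)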



Customers that deploy a dual Main Server configuration should give preference to the Secondary
server when distributing NEs and workload across the two Main Servers. The number of clients must
still be balanced across the servers.
Engineering Note: Mixed server configurations
A mixed configuration may be required to address a capacity extension scenario by adding a
Secondary Main Server based on a nominal hardware platform.
It is mainly applicable between the same hardware platforms, and it is always mandatory to keep the
Primary Main Server model superior to the Secondary one to support the module distribution.
For a ROC footprint with a legacy hardware platform (SF E4900 or SF V890), a mixed
configuration with nominal hardware (M5000 or NETRA T5440) is not supported.

The Performance Management application collects observation files individually on the Primary and
Secondary Main Servers; the files are then transferred by FTP to the relevant OSS (including NPO) for post-processing.

3.6.2 SYSTEM MANAGEMENT


The System Management solution (SMG) is an administration platform for the WMS solution, built on top of
Sun Management Centre (Sun MC), an SNMP-based platform supervision product. SMG offers a wide
range of supervision services, covering hardware, operating system services, applications and remote
systems. These services are accessible from a graphical user interface (GUI) or a command line
interface.
The SMG server runs on the WMS Primary Main Server machine. An SMG agent is deployed on the
WMS Primary Main Server and on the Secondary Main Server machine (if deployed). The purpose of the
SMG agent is to collect information on and supervise all the hardware and software modules running on
the server it is attached to (e.g. a WMS application, the OS, a disk, etc.). The corresponding status and
alarms are pushed to the SMC server through the SNMP protocol.
The SMG server on the WMS Primary Main Server includes an Oracle database for information
persistency (topology, alarms).
The SMG console is installed on all WMS clients. From any client, the SMG can be reached through a
web browser, for visualization purposes only.
The following figure shows the SMG architecture:

Figure 6: SMG architecture


For complete information on SMG, please refer to Alcatel-Lucent 9353 Management System - Administration: System Management (NN-10300-075).
That document also describes the repartition of the WMS process groups on the servers, including the
impact when a process is down, from a FCAPS (Fault Management, Configuration Management,
Administration Management, Performance Management, and Security Management) perspective.

3.6.3 FAILURE SCENARIOS


In a dual-server configuration, most of the WMS processes are duplicated on both servers to
ensure maximum independence, including better load sharing of the management of the Network
Elements per server.
However, the following key processes are hosted by the WMS Primary Main Server only: GUI application,
3GPP IRP (except the 3GPP PM IRP, which is available on both servers), HFB service (Historical Fault
Browser), Activation Manager (for CM XML file management), Security framework (e.g. Radius
service), System Management (SMG), etc.
The following table describes the failure scenarios with regards to a given server crash with its
associated consequences.

Server Impacted | Nature of Impact | Comments

Primary Main Server down:
- Loss of all WMS Clients | Supervision and operation are no longer available from the WMS Clients.
- Loss of FM supervision | GUIs are not available (SUMBB down).
- WICL | WICL not available.
- CM not available | CM operations are not available, including the usage of the Activation Manager (WPS).
- Data congestion (except for NEs managed by the Secondary Main Server) | NEs attached to the Primary Main Server cannot send data (notifications, alarms, counters and trace files). The NEs attached to the Secondary still push data to the Secondary Main Server.
- Security - no Radius | For NEs supporting Radius (local connection to the NE not available).
- Loss of OSS (except for the PM IRP on the Secondary Main Server) | The OSS cannot connect to the system (Security IRP); no Notification, FM or basic CM with the OSS; the PM IRP is not available on the Primary Main Server (still available on the Secondary Main Server).
- HFB | Unable to store historical alarms (coming from the NEs managed by the Secondary Main Server).
- Loss of System Monitoring | SMC not available for the whole WMS system.

Secondary Main Server down:
- Loss of FM supervision (for NEs attached to the Secondary Main Server only) | The NEs attached to the Secondary Main Server cannot send alarms to WMS.
- CM not available (for the NEs attached to the Secondary Main Server) | CM operations against NEs managed by the Secondary Main Server are not available.
- Data congestion (except for NEs managed by the Primary Main Server) | NEs attached to the Secondary Main Server cannot send data (notifications, alarms, counters and trace files). The NEs attached to the Primary still push data to the Primary Main Server.
- Loss of OSS - PM IRP on the Secondary Main Server | The PM IRP is not available on the Secondary Main Server.
- Loss of process monitoring on the Secondary Main Server | The processes running on the Secondary Main Server are no longer supervised.

Table 3: WMS failure scenarios and consequences

3.6.4 ALARM ON THRESHOLD


Main Server capacity numbers assume that the Alarm On Threshold feature is not enabled. It is
expected that Alarm On Threshold can increase server resource usage. Care is required when
configuring this feature to ensure that the thresholds that are set will not generate a flood of alarms,
which would have an adverse impact on system performance.
For Access Alarm on Threshold, the flooding protection limits to 300 the number of alarm raises that
can be generated by one threshold per Network Element (RNC or NODE B) in one single
evaluation period (evaluation periods line up with the counter granularity (reporting period), so this
is typically 15 minutes). If the limit of 300 alarms against one threshold is reached, only one
flood alarm is sent.
In defining the thresholds, the following recommendations should be considered on the total number of
threshold alarm events (both raise and clear alarms) which can be generated per evaluation period
(typically 15 minutes). The goal of these recommendations is to avoid producing so many threshold
alarms that they would impact the regular flow of network element alarms (even when the NE alarm
rates are high).
Engineering Rule: Maximum recommended threshold alarms per server type
The recommended maximum number of threshold alarm events per evaluation period is server-type
dependent and is defined in the table below.

Hardware Platform | Threshold events (raises + clears) per evaluation period
SF v250, N240, SE T5220 | 400
SF v880 | 750
SF v890, 4800-8 CPU, NETRA T5440 | 1000
4800-12 CPU, E4900-12 CPU, SE M5000 | 1500

Table 4: Maximum recommended threshold alarms per server type

In case of a ROC composed of 2 main servers, it can be assumed that the number in this table can be
applied to each server (primary or secondary).
In assessing the number of alarm events that can be generated by a threshold, the number of
instances that each threshold applies to needs to be considered (for example, a single FDDCell
threshold can have thousands of instances). It is recommended that the threshold definitions be
tested (or simulated) against the actual values of the counters prior to implementation on
the server, to ensure that they are well defined and do not produce an excess of alarms. The fact that
the threshold evaluation was probably done over a network in normal running condition
should also be taken into account when trying to assess worst-case conditions and threshold alarm
rates (some network conditions could increase rates beyond what was measured with sample data).
In implementing thresholds, it is recommended that a progressive approach be taken when setting the
threshold values (i.e. setting them initially to a threshold-crossing value which generates few alarms
and then adjusting the crossing value incrementally over a longer period of time). Also, for
UMTS Access, the hysteresis capability of the thresholding feature can be useful, especially
when the threshold-crossing value is somewhat close to the normal average value of the counter or
metric against which the threshold is defined.

Some examples of the evaluation of the impact of this feature are given below; they are
specific to UMTS Access.
Case Example 1:
An operator wants to define 3 different thresholds against some specific FDDCell counters and 5 other
thresholds against base RNC counters. The network is composed of 1500 NODE Bs with 3 cells each
and 15 RNCs (100 NODE Bs per RNC). The concern is whether these thresholds could have an impact on the
Main Server.
Assessment No. 1: worst case with flood alarms
A very extreme worst-case scenario would be for all the threshold instances to raise a maximum
number of threshold alarms simultaneously (in one evaluation period). In this particular case, the
FDDCell thresholds would reach their limit of 300 alarm events per RNC and would all be replaced by
1 flood alarm.
The 5 RNC thresholds cannot generate more than 15 alarms each (15 RNCs total) and the 3 FDDCell
thresholds would only generate 1 flood alarm per RNC each. So in this worst-case analysis, these definitions would
generate a burst of 15 RNCs x 8 alarms = 120 alarms.
Assessment No. 2: worst case without flood alarms
This assessment shows a worst-case analysis based on scenarios which generate the
maximum number of alarms (in one granularity period) without generating a flood alarm. In this case,
only the FDDCell thresholds can reach a possible amount of 299 alarm raise instances per
threshold in one interval. Since there are 15 RNCs and 3 FDDCell thresholds, such a worst-case
scenario would yield 15 (RNCs) x 3 (thresholds) x 299 (alarms) = 13455 alarm raise events. This by far
exceeds the limit of what is recommended on any server type (the impact of such a burst
would be that other alarms from NEs could be delayed by many minutes). This example shows
that the best way to do this type of assessment is the technique used in case example 2 below,
based on probabilities rather than on worst-case scenarios.
We continue this case assessment assuming that a more detailed assessment of the
behaviour of the alarms generated by these particular threshold definitions has shown
that it would be practically impossible for the 3 FDDCell thresholds to generate a high
number of alarms on more than 1 RNC at any point in time. This means that the maximum number
of threshold alarms which could be raised in one period becomes 1 RNC x 3 thresholds x 299 alarms =
897 alarms, a number which can be managed on servers of type 890, 4800 and 4900.
Case Example 2: (recommended assessment methodology)
An operator is interested in implementing many thresholds on a series of FDDCell-based counters on
an SF 4800 based ROC which is managing 6000 cells. The operator sets the threshold-crossing values in
such a way that under normal conditions only 0.1% of components (cells) have a threshold alarm
raised against them. To be safe, the operator assumes that in some extreme conditions this number can
increase 20 times (to 2%). It has been observed that when a threshold is raised it normally stays
raised for 2 intervals. It has also been observed that threshold crossings are statistically independent
from one cell to another and more or less uniformly distributed over time (this is to keep this example
simple).
In this case, using the assumed worst-case value of 2% of 6000 cells, at any point in time
120 cells are in an alarmed state for each threshold. With the average hold time of these alarms
lasting 2 periods, there will be 120 raise alarm events and 120 clear alarm events per 2
periods, so 120 alarm events per period. The maximum recommended value for the number of
threshold alarms for an SF 4800 Main Server is 1000. We could therefore support 8 of these
thresholds, a number which is below the maximum number of thresholds which can be applied to the
FDDCell counter group. A minimal sketch of this arithmetic is given below.
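
The Python sketch below is a minimal rendering of the case example 2 arithmetic, assuming (as above) statistically independent, uniformly distributed threshold crossings.

def threshold_alarm_budget(cells, alarmed_fraction, hold_periods, server_limit):
    alarmed = cells * alarmed_fraction            # cells alarmed at any time, per threshold
    # each alarmed cell produces one raise and one clear over 'hold_periods' periods
    events_per_period = alarmed * 2 / hold_periods
    max_thresholds = int(server_limit // events_per_period)
    return events_per_period, max_thresholds

# 6000 cells, 2% worst-case alarmed, 2-period hold, SF 4800 limit of 1000 events
print(threshold_alarm_budget(6000, 0.02, 2, 1000))  # (120.0, 8)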


3.6.5 ALARM CORRELATION


Alarm Correlation is a new optional feature in OAM06 based on rule-based correlation: pre-defined rules are
run in order to exhibit dependencies between alarms and group them into Correlation
Groups by setting the same CorrelationGroupId attribute.
Correlation rules group alarms when at least two uncorrelated alarms could be correlated; one
of them is declared the Primary Main alarm (the main root cause). Other alarms of the group can be
declared Main (other possible root causes) or Symptom (consequence of a Main or Primary Main
alarm). In case of multiple root-cause alarms, the first received by the OAM is declared Primary Main
and the others are declared Main.
Figure 7: Alarm Correlation Functional Diagram (at the EMS layer, UTRAN alarms from the Node B and RNC and topology links extracted from the CM XML snapshot feed the alarm correlation modules on the WMS Main Server; the Correlation Asset (CA) engine applies the UTRAN predefined rules against the topology, and the WMS platform alarm management updates the alarms with the correlation fields, which are visible in the Alarm Manager window of the WMS client)


The correlation rules engine uses the topology information of the network it is managing.
Topology data is relationship information about the physical and logical components of a network
(link, containment, association). The engine gathers the topology information by generating Correlation Asset
(CA) topology files from the CM XML snapshot, either automatically (by a scheduled cron job every night)
or manually by a WICL command. The following files are generated by the topology extractor script:
- contain.dat: identifies containment relationships
- assoc.dat: identifies associative relationships
Topology data is stored in ASCII files to be loaded in the Alarm Correlation topology database.
The Alarm Manager window uses the following new alarm fields for colouring, filtering, sorting, and
expanding alarms and correlation groups:
- CorrelationGroupId: correlation group identifier. This field is a string of max length 10 characters (set
to "-" at GUI level if not correlated).
- CorrelationAlarmType: Primary Main, Main or Symptom (set to "-" at GUI level if not correlated).
- CorrelationGroupCount: number of alarms in the group (Main and Symptom alarms; the Primary Main
alarm is not counted). This value is only available on the Primary Main alarm (set to "-" at GUI level if
not correlated).
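
The Python sketch below is a conceptual model only (not the product's Correlation Asset implementation) of how these fields are assigned: the first root-cause alarm received by the OAM becomes Primary Main, later root causes become Main, and consequences become Symptom.

def tag_correlation_group(group_id, alarms):
    """alarms: list of (alarm_name, is_root_cause) in order of reception by the OAM."""
    tagged, primary_seen = [], False
    for name, is_root in alarms:
        if is_root and not primary_seen:
            role, primary_seen = "Primary Main", True
        elif is_root:
            role = "Main"                     # additional possible root causes
        else:
            role = "Symptom"                  # consequence of a (Primary) Main alarm
        tagged.append({"alarm": name, "CorrelationGroupId": group_id,
                       "CorrelationAlarmType": role})
    # CorrelationGroupCount excludes the Primary Main alarm itself
    count = sum(1 for a in tagged if a["CorrelationAlarmType"] != "Primary Main")
    return tagged, count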

Engineering Recommendation: Alarm Correlation Engineering Considerations


- Alarm correlation rules are pre-defined, i.e. the predefined rule groups cannot be changed.
- There is no alarm suppression.
- Topology changes during the day, such as NODE B re-parenting or NODE B addition, are not
automatically taken into account by the alarm correlation feature. The user needs to launch
the topology extractor script manually or wait for the automatic extraction during the next night.
- There are no predefined rules for POC and Transport Node 7670.
- A WMS ROC can support up to 20 alarm correlation rules.

3.6.6 AUDIT TRAIL


Although more events are logged when this option is set to Level 3, this does not have a
significant impact on server resources and thus does not affect the capacity specifications.

3.6.7 SOFTWARE DOWNLOAD


The SRS functionality on the Main Server allows download of software via e-delivery. Software for the
RNC/NODE B can be downloaded from the SRS and then downloaded to the respective NEs via the
Main Server. Parallel software download to multiple NEs of the same type is supported as shown
in the following tables. The user can select more NEs than indicated, but the transfers are queued so as not to
exceed the listed number of parallel FTP transfers.

Hardware Platform | Node B | RNC
SE M5000 (8-CPU) | 48 | 6
SE M5000 (4-CPU) | 32 | 4
NETRA T5440 (2-CPU) | 32 | 4
SE T5220 | 8 | 1

Table 5: Simultaneous software downloads to Access NE (nominal machines)

Hardware Platform | Node B | RNC
SF E4900 (12-CPU) | 48 | 6
SF4800 (12-CPU) | 24 | 3
SF E4900 (8-CPU) | 32 | 4
SF4800 (8-CPU) | 16 | 2
SF V890 (4-CPU) | 16 | 2
SF V880 (4-CPU) | 8 | 1
SF V250/N240 (2-CPU) | 4 | 1

Table 6: Simultaneous software downloads to Access NE (legacy machines)


Typical software size per Network Element type is provided in the table below (this is for UA5.x, RNC
1500).

NE Type | Software size (MB)
NODE B | 60
RNC | 350

Table 7: Typical software size per Access NE

3.6.8 PERFORMANCE MANAGEMENT


3.6.8.1 NETWORK ELEMENT OBSERVATION FILE GRANULARITIES

The table below shows the data granularities supported by this release, used to determine the storage
and capacity considerations of the server. Data granularity is the rate at which the performance
counters are generated by the network elements, and is usually the rate at which performance data
is transferred from the network element to the server.

Network Element Type | Granularity (minutes)
NODE B | 15, 60
RNC | 15, 30, 60

Table 8: Supported GPM data granularities


15-minute NodeB granularity is available for NodeBs in UA06 and is not applicable to previous NodeB
releases. The default granularity of the NodeB is 60 minutes. It is recommended to leave the NodeB
granularity at its default to optimize storage and bandwidth utilization.
Engineering Recommendation: Setting Granularity Period on NodeB
A granularity period of 15 minutes can be set on an individual NodeB. It is recommended that, unless
troubleshooting a particular NodeB, the granularity period for collecting counters be set to the
same period for all NodeBs to avoid misalignment in performance reporting.

3.6.8.2 RNC COUNTER LIST MANAGEMENT

In OAM06, the RNC introduces a counter list mechanism, with counter list management which allows
users to specify the list of counters to be activated on the RNC and keeps the collection and mediation
layer of WMS aligned with the counter list. The counter list is defined through csv-formatted
ASCII files, and activation of the counter list using the ASCII file is done through WICL.
The advantage of this feature is twofold:
- To have the RNC dedicate its resources to call management, as opposed to call monitoring.

- Some counters are linked to a UTRAN feature. If the feature is not activated, the counter is
useless. Some other counters need to be activated only at the time a new feature or functionality is
introduced, or for a specific optimization service, and do not need to be active on a day-to-day
operational basis. This allows customers to select what they want.

The counter list csv file has several parameters per counter, including isActivated, group, name,
measuredObjectClass, weight and priority. The user can estimate the weight associated with a
customized RNC counter list by using the weight values of the counters whose isActivated value is
set to Y, assuming all counters from the same group have an identical isActivated value.

Engineering Note: Scope of Counter selection

- With the RNC in UA6.0, the selection is available at group level (a group of counters gathers the
counter itself and its associated screenings, if any). If the isActivated field is set to Y for a given
counter, all the counters sharing the same group field value will be activated. This means that setting
a single isActivated field value to Y will impact all the counters of the same group. (For clarity,
the user is advised to set the isActivated field of all the counters sharing the same group to Y when at
least 1 counter from this group needs to be activated.)
- The total weight of the RNC counter list can be estimated by considering only the counters whose
measuredObjectClass field is set to RNCFunction/Cell, summing up the values of their weight
fields, and multiplying the result by the number of cells configured or projected for the RNC
considered (see the sketch after this note).
- The RNC counter list total weight can then be compared to the RNC max counter capacity. The
RNC max counter capacity depends on the RNC's INode platform type. The RNC platform type is
given by the value of the INode's attribute EM/RncIn/hardwareCapability, available through
WICL.
  o Example - for the INode platform type all6mPktServSP (PSFP, CP3, 16pOC3STM1),
the RNC max counter capacity is 4.75 million counter instances.
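
The estimation above can be scripted; the Python sketch below is a minimal version, assuming a csv file with the documented columns (isActivated, group, name, measuredObjectClass, weight, priority). The file name, exact column spellings and cell count are illustrative assumptions.

import csv

def counter_list_weight(csv_path, projected_cells, rnc_max_capacity):
    total_weight = 0.0
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # only activated cell-level counters contribute to the estimate
            if (row["isActivated"].strip().upper() == "Y"
                    and row["measuredObjectClass"] == "RNCFunction/Cell"):
                total_weight += float(row["weight"])
    instances = total_weight * projected_cells
    return instances, 100.0 * instances / rnc_max_capacity

# e.g. against the 4.75 million instance capacity of an all6mPktServSP INode:
# counter_list_weight("rnc_counters.csv", 1200, 4_750_000)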

Engineering Note: RNC Counter List Management

- Note that even if counters are deactivated in the csv counter list file, they still appear in the
3GPP observation XML files produced by WMS, with null values.
- If the number of cell-counter records reaches or exceeds 80% of the RNC capacity limit, the RNC
raises a Warning alarm.
- The priority field value indicates the priority associated with the counter, as defined by R&D and
implemented in the UA06 RNC. The higher the priority value, the less important the counter is
considered. In case of resource shortage, the RNC stops collecting counters with high priority
values first.


Engineering Recommendation: Overall Monitoring

This feature can be set on an individual RNC. It is recommended that, unless troubleshooting a particular
RNC, the activated counter list be set identically on all RNCs to avoid misalignment in
performance reporting.

Engineering Recommendation: Activation and Capacity rule

Even if it is still possible to configure the list of counters and counter families through the regular
provisioning tools (WIPS, Object Editor, WICL...) by modifying the counterIdList/familyIdList
PerfScanner attributes of the RNC, it is highly recommended to use the WICL command dedicated to
Counter List Management, in order to avoid any unexpected counter drops by the RNC.
Prior to counter list activation, it is highly recommended to estimate the load of the list through the
Check command. When using the Check command, the end user is invited to specify the projected
number of cells likely to be configured at most for this RNC and the RNC platform type. Based on this
information, WICL estimates the total weight associated with a given counter list file, expressed
as a percentage of the RNC max counter capacity.
A counter list should not be activated if the load on the RNC exceeds 80%.

3.6.8.3 RECOVERY (CATCH-UP TIME RATIO)

For a server which is fully loaded with its nominal number of NEs, the catch-up time ratio is around 1:1. This
means that if the server is down (outage, network outage, patch installation, maintenance, etc.) for
one hour, it will take one hour to catch up (or 1 day of catch-up for 1 day down). Once caught up, the
server is back to its normal steady state and all the XML files are delivered as per their normal
schedule.
Note that in some special circumstances, such as during intensive use of RNC call trace, the time required
to recover from an outage can increase. For planned outages (for example during an upgrade), it is
recommended to stop call trace sessions on the RNC before the outage.

3.6.8.4 XML FILE STORAGE CAPACITY

XML File compression (gzip)

All data mediation processes have the capability to generate compressed XML files, including UMTS
Access Call Trace. The compressed files are in the gzip format (with a .gzip extension).


Engineering Recommendation: XML File Storage

It is recommended that the compressed gzip file format always be used for storing performance
management files, especially when the NodeB granularity period is set to 15 minutes, as there are several
advantages:
- more XML files can be stored
- less bandwidth is required to transmit them over the network
- backup and restore is faster
NPO is compatible with XML files in gzip format and there is no performance impact on NPO when
reading compressed XML files.
As a general rule, the compression ratio achieved (gzip format) is typically 90% or better, so up to
10x more data can be stored using this format.
Global WMS purge functionality
The purging algorithm is applied on the Main Server to global partitions. The XML data (for
observations/counters, call trace data, etc.) is stored in the /opt/nortel/data partition, and the purge
algorithm attempts to maintain this global partition at 80% usage. This leads to a more efficient
usage of the disk space, so that in general a WMS server is expected to be able to store more
days of XML data than was possible in previous releases.
The purge functionality can bring noticeable changes in the number of days stored for different
types of networks. One reason for this is that the number of days of XML data which can be kept
is dynamically assessed, so this parameter can actually vary over time. Also, the number of
storage days is applied uniformly across all data on the server. In all cases, the WMS server
should be able to keep a minimum of 3 days of XML data. A conceptual sketch of this purge
behaviour is given below.
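
The Python sketch below is a conceptual model of this behaviour, not the actual WMS purge implementation: it deletes the oldest XML files first until the partition is back at the 80% target, while never touching files younger than 3 days.

import os, shutil, time

DATA, TARGET, MIN_AGE = "/opt/nortel/data", 0.80, 3 * 86400

def usage(path):
    total, used, _ = shutil.disk_usage(path)
    return used / total

def purge_oldest_xml(root=DATA):
    # oldest files first, XML observation/call trace data only
    files = sorted((os.path.join(d, n) for d, _, names in os.walk(root)
                    for n in names if n.endswith((".xml", ".xml.gz"))),
                   key=os.path.getmtime)
    for f in files:
        if usage(root) <= TARGET or time.time() - os.path.getmtime(f) < MIN_AGE:
            break
        os.remove(f)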

3.6.8.5 UTRAN CALL TRACE

The WMS server supports the following types of call trace, described in the table below:

Call Trace Session | Purpose
Neighbouring Call Trace (CTn) | To trace mobility-specific events and trace handovers between neighbouring cells
Core Network Invoked Call Trace (CTa) | To trace one or several UE calls selected by the Core Network and to trace UE emergency calls
Access Invoked Call Trace (CTb) | To trace dedicated data for calls based on a predefined UE identity (TMSI, P-TMSI, IMSI or IMEI)
Geographic Call Trace (CTg) | To trace dedicated data for calls established within a geographical area in the UTRAN (may be a cell, a set of cells or all the cells in the RNS)
Object Trace on the cell object (OTCell) | To trace data related to a cell or several cells (i.e. NBAP common measurements: Transmitted Carrier Power, RSSI)
Object Trace on IuCs object (OTIuCs) | To trace common Iu CS data (i.e. not linked to a given call): RANAP messages on the IuCs interface
Object Trace on IuPs object (OTIuPs) | To trace common Iu PS data (i.e. not linked to a given call): RANAP messages on the IuPs interface
Object Trace on IuR object (OTIuR) | To trace common Iur data (i.e. not linked to a given call): RNSAP messages on the Iur interface
Object Trace on IuBC object (OTIuBC) | To trace common (i.e. not linked to a given call) SABP messages on the IuBC interface

Table 9: Call Trace Type Definitions



Call Trace functionality generates a large number of records at the RNC that are post-processed by the
WMS APM and ADI modules running on the WMS server. As an example, the RNC can generate a few
megabytes (up to 5.0 MB) of call trace data per minute.
The table below contains call trace engineering guidelines for OAM06.
Server type (Note 1) | Call Trace volume per day (MBytes of XML) (Note 2) | Minutes of traced calls per day (Note 3) | Monitoring guidelines: maximum nominal Call Trace CPU usage (Note 4)
V250/N240 | 3000 | 3100 | 7.50%
V880-4 CPU-900 MHz | 5700 | 6000 | 6%
V880-4 CPU-1200 MHz | 7500 | 8000 | 6%
V890-4 CPU-1200 MHz | 15000 | 16000 | 6%
4800-8 CPU-900 MHz | 11500 | 12000 | 6%
4800-8 CPU-1200 MHz | 15000 | 16000 | 6%
4800-12 CPU-900 MHz | 17000 | 18000 | 4%
4800-12 CPU-1200 MHz | 23000 | 24000 | 4%
V4900-8 CPU-1200 MHz | 30000 | 32000 | 3%
V4900-12 CPU-1200 MHz | 46000 | 48000 | 2%

Table 10: Call Trace Engineering guidelines and daily recommended volumes of data

Engineering Note: Call Trace Engineering

Engineering Note 1: For servers with clock speeds higher than those specified, use the numbers
corresponding to the highest CPU speed available in the table (i.e. the 1200 MHz specifications).
Engineering Note 2: Call Trace volumes per day are measured from the XML files produced in
non-compressed format. (As a rule of thumb, the compression ratio for CT XML data is around 90%,
but the actual number can vary.)
Engineering Note 3: Minutes traced per day are for CTb or CTg using the troubleshooting template.
The actual volume of data which can be generated varies depending on many factors, so these numbers
are considered examples of typical usage.
Engineering Note 4: For more details on monitoring guidelines, see the text below.
In addition, the following guidelines shall be applied when configuring Call Trace sessions:
- For CTa or CTb, the Main Server can process simultaneously the traces of 10 identified user
equipments.
- For CTg, CTn or OTCell, the Main Server supports (number of RNCs / 10) simultaneously active
sessions, with a minimum of 1 session. Note that 1 RNC supports only 1 CTg, CTn or OTCell
session at a time, but CTg and CTn sessions can run simultaneously on the same RNC
(this was restricted in UA05).
- The maximum number of simultaneous calls a CTn session can trace per TMU is 300 calls. For a
fully configured RNC (12+2 TMUs), the maximum number of simultaneous calls is 300 x 12 = 3600
calls (see the sketch after this list).
- Intensive usage of call trace counters should be avoided.
- For planned server outages, it is recommended to stop call trace sessions prior to any shutdown.
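
The session-limit arithmetic above is summarized in the minimal Python sketch below (illustrative only).

def max_ctg_sessions(num_rncs):
    # (number of RNCs / 10) simultaneously active CTg sessions, minimum of 1
    return max(1, num_rncs // 10)

def max_ctn_traced_calls(tmus=12):
    # 300 simultaneous calls per TMU; a fully configured RNC has 12 (+2) TMUs
    return 300 * tmus

print(max_ctg_sessions(15), max_ctn_traced_calls())  # 1 3600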

In general, there are no software controls implemented on the Call Trace Wizard or the NEs to ensure
that the user complies with most of the recommendations in this section.

Call Trace monitoring guidelines


The above table proposes CPU limit values (in percentage) for those who want to use other call trace
scenarios (for example, combinations of different call traces, or usage of CTa or OTCell) or fine-tune their
current call trace scenarios without causing impacts to the Main Server applications. The values in the
monitoring column correspond to the normal recommended level of CPU usage of the
initadi_calltrace.x process (the process which is responsible for the XML conversion of the call trace
data). When running below the specified level of CPU usage, there should be no impact on other
functions.
The call trace processing CPU level (initadm_calltrace.x process) can be monitored with standard
Solaris tools such as prstat. As an example, the command "prstat -c -p<PID> 60 1440 >
CallTraceCPU.log &" will log in the background the CPU usage of call trace in the specified file at a rate
of one sample per minute (an acceptable sampling rate for this type of exercise) during an entire day.
<PID> must be replaced by the actual process number of the initadm_calltrace.x process. For more
information, see the man pages on Solaris for prstat and ps.
In general, the use of these guidelines will allow reaching higher volumes of call trace data than what
is in the "CT XML MB per day" or "traced call minutes" columns.
It is also allowable to exceed this value of CPU usage for a short period of time (e.g. 1 hr). However,
note that in many cases, at 2x this value, queuing of call trace data can occur, and therefore this
situation should not be sustained for a long period of time (such as more than one hour at a time).
Note that engineering services are also available to help you assess how to maximize your call trace
usage without impacting the other applications on your servers (this will be based on the principles
described in this section). A minimal log post-processing sketch is given below.
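
As a post-processing aid, the Python sketch below scans a log such as CallTraceCPU.log produced by the prstat command above and reports whether the limit was exceeded for a sustained period; it assumes the default prstat column layout, with the CPU percentage in the 9th column.

def sustained_overload(log_path, limit_pct, window_minutes=60):
    streak, worst = 0, 0
    for line in open(log_path):
        cols = line.split()
        # data lines carry the CPU field as e.g. "5.2%"; header lines are skipped
        if len(cols) >= 9 and cols[8].endswith("%"):
            cpu = float(cols[8].rstrip("%"))
            streak = streak + 1 if cpu > limit_pct else 0
            worst = max(worst, streak)
    return worst >= window_minutes  # True if above the limit for a full hour

# e.g. sustained_overload("CallTraceCPU.log", 6.0) for a V890 (6% limit)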

3.6.9 OTHER ENGINEERING CONSIDERATIONS


3.6.9.1 MAXIMUM NUMBER OF STANDING ALARMS

The number of standing alarms is defined by the number of active alarms present in the global layout
of the NSP GUI, i.e. the total number of alarms as indicated by the network banner in NSP for the entire
network. A very large number of active alarms in the NSP GUI can use server resources such as
memory and can cause degradation in server performance, leading to an increase in alarm
latency, a slowdown of server response time, loss of alarms and issues with alarm re-synchronization.
Accordingly, limits have been set in the table below on the maximum number of standing alarms
supported on the server at any one time. These limits vary based on the WMS server hardware type.
The operator must manually clear non-essential alarms on a regular basis to help ensure that this
maximum number is not reached.
Main Server type | Maximum Number of Standing Alarms
SF E4900 & SF 4800 & M5000 | 20000
SF V890 & SF V880 & T5440 | 10000
SF V250 & N240 & T5220 | 4000

Table 11: Maximum number of standing alarms per hardware type


In case of a dual Main Server ROC, it can be assumed that the number in the table above can be
applied to each server (primary or secondary); however, the total number of standing alarms in a single
ROC can never exceed 20000. For example, if the customer has a ROC composed of 2 SF V890
Main Servers, then 20000 standing alarms can be supported.

3.6.9.2 NSP SYNCHRONIZATION DELAY

After a system restart, the NSP GUI will not display all the NE and alarm information until all DAs have
"synchronized" with the building blocks.
The time it takes for the synchronization to complete varies. The delay is primarily dependent upon:
- The number of NEs being managed
- The number of active alarms
- The number of DAs registered with TUMSBB
- Where each NE is in its polling cycle
The DAs synchronize with each NE individually. When the last NE is synchronized with its DA, that
DA performs another update of NE and alarm information before passing a "synchronization complete"
message to TUMSBB. Sometimes the DA must wait for the next polling cycle of an NE before it can
synchronize that particular NE.
When TUMSBB receives a "synchronization complete" message from all DAs, TUMSBB sends
its synchronize message to SUMBB. It is only when the SUMBB synchronization is complete that all
NEs are guaranteed to appear in their layouts, with ACIs applied, etc.
Given how the synchronization process works and all the variables that affect its duration, it is
understandable that the synchronization time will vary from system to system.
After a system restart it is recommended to allow 10 minutes for the synchronization to complete
before performing any actions.
It has been observed in some large networks with 10000 to 20000 active alarms, or when receiving
high alarm rates, that the synchronization process can take in excess of one hour.

3.6.9.3 RECOMMENDED MDM GMDR ALARM LIST SIZE

The Multiservice Switch and SNMP alarms are stored in the MDM GMDR on the Main Server(s). By
default, GMDR stores up to 6000 alarms. If the number of active alarms is more than the GMDR alarm
list limit, some active alarms will be lost from the alarm list on the Main Server.
It is important that the number of active alarms does not exceed the GMDR limit, so that active alarms
are not lost. To avoid the number of active alarms reaching the GMDR alarm list limit, it is recommended
to monitor the number of active alarms periodically using the GMDR Administration GUI, which is
launched from the MDM Toolset via the NSP GUI. The operator may be required to take the necessary
actions to reduce the number of active alarms if it gets close to the configured GMDR maximum number
of alarms, or to increase this limit.

3.7. BACKUP AND RESTORE

The following types of backup are supported as part of the WMS Main Server (Primary and Secondary)
to local tapes:
- System Backup & Restore
- Essential data backup and restore
- Non-essential data backup and restore
- Essential and non-essential Data Backup & Restore
- Historical Data Archive & Retrieval

In addition to the local backup and restore solution, centralized backup and restore of the WMS
servers using VERITAS NetBackup 6.0 is also available. Using a Veritas NetBackup 6.0 DataCenter
server, it is possible to perform all the B&R operations previously mentioned.
Please refer to the Wireless Network Management System Backup and Restore User Guide, NTP
NN-10300-035, for more information on the backup and restore procedures, including the centralized
solution.

System Backup & Restore

The system backup is used to back up the operating system, applications and OAM system
management data. The restore must be performed on the same hardware platform where the backup
was originally performed.
In this release, the requirement to take the WMS server offline for a short time during the backup has
been removed.
Engineering Note: System Backup & Restore
The server is down during restore operations.

Essential and non-essential Data Backup and Restore

This type of backup is used to save the application data, configuration data, software repository
data and historical data. This operation is done online, so users can continue to view alarms
and performance data for their network, although certain restrictions apply.
The server is, however, down during the configuration data restore operation.
The essential and non-essential data backup and restore procedure can be used for disaster recovery,
in order to save and recover the essential and non-essential data.
Historical Data Archive and Retrieval
This type of backup is used to save historical data. Historical data consists of HFB alarms, audit trail
data, CSAL, OAM stability data, NE-based stability data, XML performance files and performance
counters/metrics. The historical data archive and retrieval are both performed online. The functionality
to save the historical data is new in this release, and the historical data retrieval is performed online,
as opposed to the other types of restore.

3.7.1 B&R ENGINEERING CONSIDERATIONS


- To maintain consistency and integrity of the data on the WMS servers, when a Backup or Restore
is performed on one server, it must also be performed for all the other servers in the ROC.
- The system restore also requires the Boot CD.
- One SUN StorEdge tape drive per server is standard equipment.
- It is recommended to perform a system backup at least once after installation of every release.
Performing a system backup is also suggested after installation of an important set of patches.
- It is recommended to back up the essential data at least once a day; a daily frequency is practical
since this backup is done online.
- It is recommended to perform a historical data archive once a day.
- The capacity of a DDS-5 tape is 36 GB and the capacity of a DAT 72 tape is 36 GB. The maximum
data transfer rate of SUN StorEdge tape drives before data compression is 3 MB/s.
- The data compression ratio can vary depending on the type of data. Assuming a compression ratio
of 2:1, 72 GB can be stored per DDS-5 tape and 72 GB per DAT 72 tape, and the data transfer rate
would be 6 MB/s.


3.7.2 TAPE DRIVE & CARTRIDGE


Tape drive and cartridge are optional equipment, but are highly recommended when purchasing a
server in order to guarantee a minimum of business continuity (backup and restore usage). Please
contact your Alcatel-Lucent representative to identify the backup and restore solution that suits the
customer's infrastructure.
Several tape types and solutions are proposed:

DAT 72: The Standalone Rack Mount Dual DAT72 tape drive (available in AC or DC power) is
delivered with one DAT72 tape; DAT 72 is NEBS compliant. The DAT 72 solution is mainly proposed
with the small WMS solution based on NETRA 240 hardware. (The DAT 72 tape drive is delivered by
default with the WMS Sun Fire SFV890 and E4900.)

LTO4HH: The Sun StorageTek HP LTO4 Half Height SAS tape drive (AC power) is delivered with one
tape drive. Another tape drive (Sun StorageTek Bare HP LTO4 drive) can be purchased and installed
within the Sun StorageTek HP LTO4. Each Sun StorageTek HP LTO4 tape drive has a capacity of
800 GB.
LTO4HH is mainly proposed for the small-scale server. If a standalone LTO4 bare drive is added, the
resulting Sun StorageTek HP LTO4 box can be shared by two servers (e.g. two SE T5220), each
server being connected to one LTO4 tape drive. LTO4HH is also applicable to a medium MS-NPO
based on the M4000 2-CPU.

SL24 LTO4HH: For more simplicity and the value of high-capacity automated backup and recovery,
the SL24 LTO4HH SAS tape autoloader can be used. It automatically reinserts a tape in the drive
when the previous one has been ejected. The SL24 arrives rack-ready for installation into a standard
19-inch rack, or an optional kit can be used to integrate it into a tabletop environment. The SL24 ships
with one drive and includes two removable 12-slot magazines, with one slot dedicated to import/export
of data cartridges.

The table below describes the compatibility between the tape solutions and the server models
according to the domain (MS-NPO, WMS, etc.).

Domain: WMS
Server Model | SDLT600 | DAT 72 | LTO4HH | SL24 LTO4HH SAS Tape autoloader
NETRA 240 (NEBS product) | N.A | Recommended | N.A | N.A
SF V890 | N.A | Recommended (pre-installed) | N.A | N.A
SF E4900 | N.A | Recommended (pre-installed) | N.A | N.A
SE M5000 | N.A | N.A | N.A | Recommended for automatic cartridge management
SE T5220 | N.A | N.A | Recommended | N.A
NETRA T5440 | N.A | DAT 72 recommended with the T5440 to address full NEBS requirements (DAT 72 available in AC or DC) | N.A | Recommended for automatic cartridge management if full NEBS compliance is not required (for a T5440 in DC, make sure a DC-to-AC power converter is available on site for the SL24 LTO4HH)

Domain: NPO / MS-NPO / MS-SUP
Server Model | SDLT600 | DAT 72 | LTO4HH | SL24 LTO4HH SAS Tape autoloader
NETRA 240, NPO only (NEBS product, AC or DC power) | Supported | Recommended | N.A | N.A
SE T5220 | N.A | N.A | Recommended | N.A
SFV490-2CPU (NPO only) | Recommended | N.A | N.A | N.A
NETRA T5440, MS SUP only (NEBS product, AC or DC power) | N.A | N.A | Recommended (for a T5440 in DC, make sure a DC-to-AC power converter is available on site for the LTO4HH) | N.A
SFV490-4CPU | Recommended | N.A | N.A | N.A
M4000-2CPU | N.A | N.A | Recommended | N.A
M4000-4CPU | N.A | N.A | Supported (see note) | N.A

Note: A local backup solution through tape drives is not recommended for a large MS-NPO; local backup to disk, or a centralized Backup & Restore solution with a partner (e.g. LEGATO or VERITAS infrastructure), has to be considered.

Table 12: Tape drive and Domain/Server matrix compatibility

3.7.3 BACKUP TIME ESTIMATION


- As a guideline, a system backup is expected to take approximately 2 hours.
- The time required to perform an essential and non-essential data backup or a historical data archive
is dependent on the amount of data accumulated on the servers. Therefore, it is not possible to give
a precise time required to perform data backups. The minimum time can be estimated if the amount
of data to be backed up is known:
Time in hours = x MB / (4.5 MB/s * 3600)
- The restore time can be estimated to be about 15% more than the backup time, plus the reboot time
of the server. A minimal sketch of this estimation is given below.
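
The minimal Python sketch below applies the estimation formula above; the 0.5-hour reboot figure is an assumption for illustration only.

def backup_hours(data_mb, rate_mb_s=4.5):
    return data_mb / (rate_mb_s * 3600)

def restore_hours(data_mb, reboot_hours=0.5):
    # restore is about 15% more than the backup time, plus the server reboot time
    return 1.15 * backup_hours(data_mb) + reboot_hours

print(round(backup_hours(100_000), 1))  # about 6.2 hours for 100 GB of data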


3.8. INTEGRATION OF WMS TO A STORAGE AREA NETWORK (SAN)

The WMS server supports Direct Attached Storage (DAS), where disk arrays are connected directly to
the SF E4x00 servers. A new feature in OAM06 allows integration of the WMS SF E4x00 servers into a
Storage Area Network (SAN) to provide a more flexible solution to the customer.
With the SAN solution, the /opt/nortel/data partition is transferred to the SAN disk volumes, while
the internal system and application data are kept on the local disks of the host server.

Figure 8: Storage Area Network Architecture (the primary and secondary Main Servers of the WMS ROC connect through 2 dual-FC-optical-port HBA cards each and redundant Fibre Channel switches to two metadevices, each built from LUNs presented as virtual disks)


In this context, it is expected that the SAN solution, including the disk volumes and Fibre Channel
switch connectivity, is provided by the customer's existing IT network. The migration of an existing
WMS to SAN or the integration of a new WMS server to SAN is provided as a service by Alcatel-Lucent
to the customer. Please contact your Alcatel-Lucent representative for more information.
The customer SAN solution needs to respect the following rules when integrating with WMS:
- The SAN is expected to support the Sun hardware platform.
- The SAN is expected to support the Sun StorageTek Traffic Manager software (STMS), formerly Sun
MPxIO, for multipathing. This is integrated in the Sun Solaris 10 Operating System.
- The SAN is expected to support the Solaris format command with the EFI (Extensible Firmware
Interface) label.
- Solaris Volume Manager (SVM) is used for concatenation and mirroring.
- The operator is expected to provide 2 or more SAN volumes/LUNs (Logical Unit Numbers).
- Each LUN should be accessible through an even number of paths.
- The customer is expected to group these volumes in 2 sets for SVM sub-mirrors to optimize
performance, load balancing, redundancy and upgrade downtime.
- Each SAN set should be 1.5 TB large (and less than 2 TB).
- To guarantee nominal WMS KPIs, the SAN should support:
  o Write: 3000 IOPS (Input/Output Operations Per Second), 95 MB/s
  o Read: 3000 IOPS, 95 MB/s
- The bandwidth required between the SAN and the WMS servers is 1 Gbps for SF E4800 and 4 Gbps
for SF E4900 WMS servers.
- The customer is expected to use the WMS Backup & Restore scripts.
- The migration to a SAN environment is done post-upgrade to OAM06.
- SAN is supported on the following WMS hardware platforms: SF E4x00 (8-CPU and 12-CPU) and
SE M5000.


4. WMS EXTERNAL INTERFACE ENGINEERING CONSIDERATIONS

4.1. OVERVIEW

The Third Generation Partnership Project (3GPP) is a collaboration among standards development
organizations and other related bodies for the creation of a complete set of standard technical
specifications, globally accepted, for 3G-based systems.
The following 3GPP Integration Reference Point (IRP) solution sets are supported through the Itf-N
(Interface-Northbound):
- 3GPP R6 Entry Point IRP - for discovery of outline information of IRPs
- 3GPP R6 Alarm IRP - for Fault Management
- 3GPP R6 Communication Surveillance IRP - for Itf-N monitoring
- 3GPP R6 Basic CM IRP - for Configuration Management (synchronous operations)
- 3GPP R6 Kernel CM IRP - for Configuration Management (notifications)
- 3GPP R6 Bulk CM IRP - for Bulk Configuration Management (limited to the UMTS Access domain)
- 3GPP R6 PM IRP - for Performance Monitoring (limited to the UMTS Access domain)
- 3GPP R6 File Transfer IRP - for file exchanges through the Itf-N
- 3GPP R6 Notification IRP - for IRP Manager(s) to subscribe to 3GPP Building Blocks (BBs) for
receiving event notifications

The OAM solution also offers support for 3GPP-compliant (XML format) performance management
interfaces.
In addition, Alcatel-Lucent's 3GPP offering includes the implementation of the Alcatel-Lucent-specific
Security BB, which is mandatory for all customers. The Security BB authenticates the IRP Manager's
identity and provides the authenticated IRP Manager with the Inter-operable Object Reference (IOR)
of the Entry Point IRP Agent.
For detailed information on the 3GPP external interfaces, please refer to the following Technical
Manuals:
- NN-20500-068 - Alcatel-Lucent 9353 Management System - 3GPP Alcatel-Lucent Specific
Interfaces Specification
- NN-20500-074 - Alcatel-Lucent 9353 Management System - 3GPP Building Block Configuration
Guide

4.2. THE ALCATEL-LUCENT SECURITY BUILDING BLOCK

The Alcatel-Lucent-specific Security BB provides user authentication; successful authentication enables
retrieval of the Entry Point IRP Agent IOR. Consequently, before an IRP Manager can access the
Entry Point IRP and eventually the 3GPP functional BBs, it must first communicate with the
Alcatel-Lucent Security BB and provide its user account and password. In turn, the Security BB validates
the user through the WMS Security Services Framework. If security clearance is obtained, the Security
BB returns the Entry Point IRP and its IOR. An exception is raised to the IRP Manager if the
identification information provided is invalid. A conceptual sketch of this access sequence is given below.
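
The Python sketch below is a conceptual model only of this access sequence (plain classes, not actual CORBA/IRP client code); all names are illustrative.

class SecurityBB:
    def __init__(self, security_framework, entry_point_ior):
        self.framework = security_framework   # WMS Security Services Framework
        self.entry_point_ior = entry_point_ior

    def authenticate(self, user, password):
        # validate the IRP Manager through the WMS Security Services Framework
        if not self.framework.validate(user, password):
            raise PermissionError("invalid IRP Manager credentials")
        return self.entry_point_ior           # IOR of the Entry Point IRP Agent

def irp_manager_login(security_bb, user, password):
    entry_point = security_bb.authenticate(user, password)
    # the Entry Point IRP is then used to discover the Alarm, Basic CM, Bulk CM,
    # PM and Notification IRPs; direct access to those BBs is not possible
    return entry_point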


The 3GPP Entry Point IRP

The 3GPP Entry Point IRP provides an access point to the other functional BBs (Alarm IRP, Basic CM
IRP, Kernel CM IRP, Bulk CM IRP, PM IRP and Notification IRP). The use of the Entry Point IRP is
mandatory in this release, and the IRP Managers can only access the functional BBs via this entry
point.

4.3. THE 3GPP NOTIFICATION BUILDING BLOCK

The 3GPP Notification BB implements the CORBA interface needed to access the required event
channels. There is one event channel per notification category (e.g. Kernel CM events are one
notification category while Alarm events are another). There is only one Notification BB per 3GPP BB
instance.

4.4. 3GPP FAULT MANAGEMENT BUILDING BLOCK (3GPP FM BB)

The 3GPP FM BB is Alcatel-Lucent's implementation of the 3GPP Alarm IRP Agent CORBA solution set. The purpose of the Alarm IRP is to define an interface through which an Alarm IRP Agent can communicate alarm information (for its managed objects) to one or several Alarm IRP Managers.
The 3GPP FM BB connects to the WMS underlying system instances within a ROC, collects alarms, does the mediation, and forwards 3GPP-formatted alarms to the Alarm IRP Manager(s). The BB also polls periodically to detect new alarm-related events.
The 3GPP FM BB communicates with the 3GPP Notification BB, informing it of all alarm-specific events. Alarm events are then propagated to the subscribed Alarm IRP Managers by the Notification Service.
The 3GPP Communication Surveillance IRP
As part of the 3GPP FM BB, Alcatel-Lucent has also implemented the Communication Surveillance solution and supports the sending of the notifyHeartbeat notification through the 3GPP FM BB channel. This feature provides the Alarm IRP Managers the ability to monitor the communication between the 3GPP FM BB and themselves through notification channels of the CORBA Notification Service. The solution consists of periodically broadcasting, if activated, a specific standard notification called notifyHeartbeat to all subscribed Alarm IRP Managers.
The figure below illustrates Alcatel-Lucent's 3GPP FM high-level architectural implementation.
[Figure: the 3GPP Alarm IRP Manager reaches the 3GPP Output Building Block (3GPP FM BB, including the CS IRP, plus the 3GPP Notification BB) over the 3GPP standard CORBA interface; the Alcatel-Lucent Security BB (including the Entry Point IRP) authenticates against the WMS Security Services Framework; an Alcatel-Lucent specific CORBA interface links to the WMS Underlying System.]
Figure 9: 3GPP FM High Level architecture


4.5. 3GPP BASIC CM BUILDING BLOCK (3GPP BASICCM BB)

The 3GPP Basic CM BB is Alcatel-Lucent's implementation of both the 3GPP Basic CM IRP and the 3GPP Kernel CM IRP Agent CORBA solution sets. It connects to the underlying OAM system instances within a ROC, retrieves NE information, does the mediation, and provides Basic CM IRP Manager(s) with 3GPP compliant network information.
The Basic CM part of this IRP Agent provides operations for the manager to retrieve all supported Managed Objects and attributes.
The Kernel CM part, which is the main part of the Building Block, interacts with the Notification IRP Agent to get the event channel reference dedicated to Kernel CM events. All ongoing events are sent to subscribed Kernel CM IRP Managers via this event channel by the Notification service.
The figure below illustrates Alcatel-Lucent's 3GPP Basic CM / Kernel CM high-level architecture.

[Figure: the 3GPP Basic CM / Kernel CM IRP Manager reaches the 3GPP Output Building Block (3GPP Basic CM BB, including the Kernel CM IRP, plus the 3GPP Notification BB) over the 3GPP standard CORBA interface; the Alcatel-Lucent Security BB (including the Entry Point IRP) authenticates against the WMS Security Services Framework; an Alcatel-Lucent specific CORBA interface links to the WMS Underlying System.]
Figure 10 : Basic CM/Kernel CM High Level architecture

4.6. 3GPP BULKCM BUILDING BLOCK (3GPP BULK CM BB)

The Bulk CM BB implements the Bulk CM IRP Agent CORBA interface defined by 3GPP standards. The supported Bulk CM Managed Objects are within Alcatel-Lucent's UMTS Terrestrial Radio Access Network (UTRAN) domain.
The Bulk CM IRP Agent interacts with the Notification IRP Agent to get the event channel reference dedicated to Bulk CM events. All applicable events are sent to subscribed Bulk CM IRP Managers via this event channel by the Notification service. The main operations available to IRP Managers are: upload, download, activate and fallback. The IRP Agent uses XML configuration data files to exchange data with the OSS client (IRP Manager).
Upload operations request the upload of configuration data to the OSS. XML files are sent describing the Network Resource Model (NRM) of the Alcatel-Lucent Bulk-supported 3GPP NRM.
For download requests, the OSS sends an XML configuration file to perform active configuration management for Alcatel-Lucent's Bulk-supported 3GPP NRM.
The figure below illustrates Alcatel-Lucent's 3GPP Bulk CM high-level architecture.


[Figure: the 3GPP BulkCM IRP Manager reaches the 3GPP Output Building Block (3GPP Bulk CM BB plus the 3GPP Notification BB) over the 3GPP standard CORBA interface; the Alcatel-Lucent Security BB (including the Entry Point IRP) authenticates against the WMS Security Services Framework; an Alcatel-Lucent specific CORBA interface links to the WMS Underlying System, which manages the UTRAN (BTS, RNC).]
Figure 11 : Bulk CM High Level architecture

4.7. 3GPP PM BUILDING BLOCK (3GPP PM BB)

The PM BB implements the PM IRP Agent CORBA interface defined by 3GPP standards. This IRP solution set supports Alcatel-Lucent's UMTS Terrestrial Radio Access Network (UTRAN) domain.
Fundamentally, the PM BB is dependent on the Access Data Interface (ADI). The ADI is responsible for collecting performance data, creating the proprietary performance data files, and notifying the PM BB of the availability of these files. The PM BB, in turn, receives events from the underlying ADI instance, mediates and translates the data from Alcatel-Lucent's proprietary format to the 3GPP standard format, and sends notifications to the PM IRP Manager via the Notification Service. The 3GPP PM BB also handles measurement job operation requests (e.g. creating, listing, and stopping measurement jobs) from the PM IRP Manager.
The figure below illustrates Alcatel-Lucent's 3GPP PM high-level architecture.
[Figure: the 3GPP PM IRP Manager reaches the 3GPP Output Building Block (3GPP PM BB plus the 3GPP Notification BB) over the 3GPP standard CORBA interface; the Alcatel-Lucent Security BB (including the Entry Point IRP) authenticates against the W-NMS Security Services Framework; an Alcatel-Lucent specific CORBA interface links to the Access Data Interface (ADI) (Performance Server), which collects from the UTRAN (BTS, RNC).]
Figure 12 : PM High Level architecture


4.8. 3GPP BUILDING BLOCK DEPLOYMENT

Typically, one 3GPP FM BB, one 3GPP BasicCM BB, one 3GPP Bulk CM BB, and one 3GPP PM BB instance is deployed per ROC. All BBs co-reside with the primary WMS Main Server.
The figure below shows a typical 3GPP Output Block configuration within a ROC.
[Figure: a 3GPP IRP Manager reaches, over the 3GPP standard CORBA interface, the 3GPP Output Building Block deployed on the Primary Main Server: 3GPP FM BB (includes CS IRP), 3GPP Basic CM BB (includes Kernel CM IRP), 3GPP Bulk CM BB, 3GPP PM BB and 3GPP Notification BB, together with the Alcatel-Lucent Security BB (includes Entry Point IRP); an Alcatel-Lucent specific CORBA interface links to the WMS Underlying System, which spans the Primary and Secondary Main Servers.]
Figure 13 : 3GPP Output Building Block Deployment within a ROC

4.9. 3GPP EXTERNAL INTERFACE CAPACITY AND PERFORMANCE

4.9.1 3GPP OUTPUT BUILDING BLOCK

The 3GPP Output Building Block supports 100 user roles (previously known as user groups) per ROC.

4.9.2 3GPP FM BUILDING BLOCK EXTERNAL INTERFACE


The 3GPP FM BB is designed to handle concurrent Alarm IRP Managers requests; however, since the
alarm notifications are sent to the Alarm IRP Managers using the CORBA Notification Service,
performance of the Service would drop significantly with increased active subscriptions (notification
consumers). Consequently, the number of concurrent Alarm IRP Managers must be restricted when
considering the performance of the Notification Service; specifically, the notification rate. Therefore,
based on this factor, the 3GPP FM BB is limited to support (at most) two active users (subscriptions).
The 3GPP FM BB supports:


- Maximum number of standing alarms in memory for the 3GPP FM BB: 80,000 alarms
- Maximum number of standing alarms in memory per WMS Server: 20,000 alarms
- Maximum alarm burst rate for one (1) minute: 200 alarms/second. (In the current solution, however, the actual alarm burst rate is equal to the notification rate sustainable by the Notification Service - the WMS solution uses the IONA Orbix ASP Notification Service to enable the Notification IRP functionality: 150 events/second for one user, 90 events/second for two users.)
- Average sustained alarm arrival rate per number of IRP Managers with alarm filtering: 120 alarms/second with one (1) IRP Manager; 60 alarms/second with two (2) IRP Managers
- Maximum number of subscriptions for 3GPP FM BB notifications: 2

Table 13: 3GPP FM BB Specifications


Engineering Rule: Support of Multiple-OSS Environment
A ROC can support up to 2 OSS for Fault Management.

4.9.3 3GPP BASIC CM (INCLUDING KERNEL CM IRP) BUILDING BLOCK EXTERNAL INTERFACE

The 3GPP BasicCM BB supports:
- Maximum number of managed objects or NEs: 10,000
- Maximum number of subscriptions for KernelCM notifications: 2 subscriptions (at most), with an average notification arrival rate of one (1) notification per second

Table 14 : 3GPP CM BB Specifications

4.9.4 3GPP BULK CM BUILDING BLOCK EXTERNAL INTERFACE


The 3GPP Bulk CM BB supports:
- A maximum of four (4) concurrent running Upload configuration sessions.
- Not more than one (1) running Download, Activate, or Fallback session.
- If an activate/fallback action is running, the OSS cannot run an upload or activate/fallback action.
- If an upload action is running, the OSS cannot run an activate/fallback action.
When an Upload, Download, Activate, or Fallback session is invoked, XML files are transferred (through FTP) between hosts. The size of these files depends on the number of impacted nodes and their respective configuration. Consequently, session execution times are dictated by the size of the files and the available system/network resources (CPU and bandwidth respectively). The session concurrency rules are summarized in the sketch below.
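As a non-normative illustration, the following Python sketch encodes the four concurrency rules stated above; the function and session-type names are hypothetical and are not part of the 3GPP Bulk CM IRP itself:

# Encodes the Bulk CM session concurrency rules stated above: up to four
# concurrent Upload sessions, at most one Download/Activate/Fallback
# session, and mutual exclusion between uploads and activate/fallback.
# Hypothetical helper for illustration only.
def can_start(session, running):
    """session: 'upload' | 'download' | 'activate' | 'fallback'
    running: list of currently running session types."""
    uploads = running.count("upload")
    exclusive = [s for s in running if s in ("download", "activate", "fallback")]
    if session == "upload":
        # blocked by a running activate/fallback, or by 4 running uploads
        return uploads < 4 and not any(
            s in ("activate", "fallback") for s in running)
    if exclusive:
        return False  # only one Download/Activate/Fallback at a time
    if session in ("activate", "fallback") and uploads:
        return False  # a running upload blocks activate/fallback
    return True

assert can_start("upload", ["upload", "upload"])
assert not can_start("activate", ["upload"])
assert not can_start("upload", ["activate"])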
Engineering Rule: Support of Multiple-OSS Environment
A ROC can support up to 2 OSS for Configuration Management.


4.10. 3GPP PM BB EXTERNAL INTERFACE


The 3GPP PM BB can support notifications from a maximum of four (4) ADIs. The 3GPP interface is supported through the OSS executing an ftp pull of XML files from the WMS Server, as sketched below.
The 3GPP PM interface is available on both the Primary and Secondary Main Servers in the case of a dual-server configuration in a ROC.
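As a non-normative illustration, the following Python sketch shows such an ftp pull from the OSS side; the host name, login and remote directory are hypothetical placeholders, since the actual values are deployment specific:

# Minimal sketch of an OSS-side ftp pull of 3GPP PM XML files from the
# WMS Server. Host, credentials and remote directory are hypothetical
# placeholders; actual values are deployment specific.
import ftplib
import os

WMS_HOST = "wms-main-server.example.net"   # hypothetical
PM_DIR = "/pm/3gpp"                        # hypothetical
LOCAL_DIR = "pm_xml"

def pull_pm_files(user, password):
    os.makedirs(LOCAL_DIR, exist_ok=True)
    ftp = ftplib.FTP(WMS_HOST)
    ftp.login(user, password)
    ftp.cwd(PM_DIR)
    for name in ftp.nlst():
        if not name.endswith(".xml"):
            continue  # only pull 3GPP-formatted PM result files
        local_path = os.path.join(LOCAL_DIR, name)
        with open(local_path, "wb") as f:
            ftp.retrbinary("RETR " + name, f.write)
    ftp.quit()

if __name__ == "__main__":
    pull_pm_files("oss_user", "secret")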
Engineering Rule: Support of Multiple-OSS Environment
A ROC can support up to 2 OSS for Performance Management.

4.11. OSS REMOTE LAUNCH OF WMS GUI


It is possible to launch, with an NE in context, the Resource Browser as well as the Alarm Manager from within an OSS application. After the WMS GUI starts and the user is authenticated successfully, the communication channel between the ALLP (Application Launch Listener Plug-in) and the OSS Adapter is set up. The launch request, which contains the facility name and facility argument (NE context), is then sent from the OSS to the ALLP through the communication channel. The ALLP forwards the request to the Application Launch System, which launches the corresponding facility according to the request.

4.12. INTEGRATION OF WMS TO MS-PORTAL


MS-Portal (Multi-Standard Portal) is an Alcatel-Lucent network manager that provides customers with a common application to manage 2G & 3G radio networks. MS-Portal ensures customers an integrated multi-standard environment without bringing constraints into customer networks. The two main components of MS-Portal are MS-SUP (Multi-Standard Supervision) and NPO (see the dedicated chapter on NPO).
Integration of WMS to MS-Portal for Fault Management is done over the 3GPP standard CORBA interface. MS-Portal uses the current 3GPP Alarm IRP to retrieve active alarms. This integration is described in section 4.4.
Integration of WMS to MS-Portal for Configuration Management is done over a new MS-Portal IRP over a CORBA interface to the MS-Portal servers. The persistent IOR (Interoperable Object Reference) of the MS-Portal IRP is retrieved on the WMS primary main server through ftp, in a file created by MS-Portal. MS-Portal subscribes to the MS-Portal IRP for configuration changes. The MS-Portal IRP can then push snapshots and events to MS-Portal. Notification events (that is, configuration changes, state changes, object creation, object deletion) are sent by the MS-Portal IRP to MS-Portal as work orders in CM XML format.
The MS-Portal tool for Performance Management is NPO and integration of WMS to NPO is covered
in a dedicated chapter in this document.
Navigation from the MS-Portal to WMS GUI is done using the OSS remote launch in-context from the
MS-Portal GUI.

4.13. WMS EAST-WEST INTERFACE


The WMS East-West (EW) interface allows the synchronization of external cell data, from the server managing the cells to the server considering them as external cells. In OAM 6.0, the EW interface is defined between 2G and 3G WMS network management systems, and between 3G WMS network management systems. The figure below gives the flow diagram of this synchronization.


[Figure: two WMS servers (each running CP_UMTS) synchronize over the 3G-3G E/W interface: each exports CMXML snapshots and publishes CSV files, aggregates cell info from the peer, and activates CMXML work orders; over the 2G-3G E/W interface, a 2G OMC exchanges CSV files with WMS; the underlying network elements are 3G BTSs and 2G BTSs.]
Figure 14 : WMS East-West Interface


Cell data information between WMS and WMS, and between WMS and the 2G network manager, is exported and exchanged as CSV files retrieved through ftp. Daily cell information within the WMS server is exported as CMXML snapshots in XML files.
On WMS, the cell data information from the external WMS or 2G manager is then aggregated with the local external cell info, as sketched below. In addition, synchronization work order files in CMXML format can be generated and then activated to propagate the cell synchronization to the WMS.
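The aggregation step can be pictured with the following Python sketch; it is a simplified, hypothetical illustration - the CSV column names used here (for example cell_id) are placeholders, the real layout being defined by the EW interface. The folder paths are those given in the Engineering Note below:

# Illustrative sketch of the aggregation step: merge cell rows from all
# external-OAM CSV files (at most 12, one per external OAM) retrieved into
# the eastwest "in" folder. Column names are hypothetical placeholders.
import csv
import glob
import os

IN_DIR = "/opt/nortel/tmp/eastwest/in/cells"
OUT_FILE = "/opt/nortel/tmp/eastwest/out/cells/aggregated_cells.csv"  # hypothetical name

def aggregate_external_cells():
    cells = {}  # keyed by cell_id; later files override earlier duplicates
    for path in sorted(glob.glob(os.path.join(IN_DIR, "*.csv"))):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                cells[row["cell_id"]] = row
    if not cells:
        return 0
    fields = list(next(iter(cells.values())).keys())
    with open(OUT_FILE, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(cells.values())
    return len(cells)

if __name__ == "__main__":
    print("aggregated", aggregate_external_cells(), "external cells")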
Engineering Note: East-West Interface
- The default maximum number of supported external OAMs is 12.
- The synchronization with external OAMs is performed on a daily basis.
- The purge on folders /opt/nortel/tmp/eastwest/in/cells/ and /opt/nortel/tmp/eastwest/out/cells/ only keeps 1 day of retrieved/exported files. Therefore, the maximum number of files that can be present by default in each of these folders is 12.

5. WMS CLIENTS AND SERVER OF CLIENTS ENGINEERING CONSIDERATIONS

This section focuses on the usage of WMS from an OAM user perspective and related client
engineering considerations (excluding OSS clients).
The following clients are in the scope of this section: WMS Clients (PC & UNIX) and the WMS Server of Clients.
For RFO, NPO and WQA clients, please refer to the individual chapters on this topic.

5.1. WMS CLIENT CAPACITY

The number of concurrent WMS Clients (i.e. number of simultaneously active clients) supported by the
Main server is dependent on the number and type of servers in the ROC as per table below:
Main Server Type: Maximum number of concurrent clients
- SE M5000-8 CPU: 70
- SF E4900-12 CPU, SF 4800-12 CPU: 50
- SE M5000-4 CPU: 40
- SF E4900-8 CPU, SF 4800-8 CPU: 30
- NETRA T5440, SF V890, SF V880: 20
- SE T5220, SF V250, N240: 5

Table 15 : Number of concurrent clients per Main Server type


In a ROC with a Dual Main Server configuration, the number of concurrent users supported is the sum of the individual Main Servers' supported clients. In the case of a Dual Main Server ROC, the initial WMS Client session set-up is done via the Primary Main Server, after which client communications are done with all servers in the ROC.
Example: For a ROC with 2 SF E4900-12 CPU Main Servers, the total supported number of WMS concurrent clients is 2 x 50 = 100.
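For illustration, the capacity rule can be expressed as a small Python sketch; only the Table 15 values explicitly confirmed by the example above are filled in, the remaining rows being left to the reader:

# Sketch of the ROC concurrent-client capacity rule: capacity per Main
# Server type comes from Table 15, and a Dual Main Server ROC supports
# the sum of the individual servers' figures. Dictionary name is a
# hypothetical placeholder.
MAX_CONCURRENT_CLIENTS = {
    "SE M5000-8 CPU": 70,      # per Table 15
    "SF E4900-12 CPU": 50,     # per Table 15 (confirmed by the 2 x 50 example)
    "SF 4800-12 CPU": 50,
    # extend with the remaining rows of Table 15 as needed
}

def roc_capacity(server_types):
    """Total concurrent WMS clients supported by the Main Servers of one ROC."""
    return sum(MAX_CONCURRENT_CLIENTS[t] for t in server_types)

# Dual Main Server ROC with two SF E4900-12 CPU servers: 2 x 50 = 100
assert roc_capacity(["SF E4900-12 CPU", "SF E4900-12 CPU"]) == 100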
The maximum number of registered users in a ROC (registration at the WMS Main Server NSP application level) is given in the following table:
Main Server Type: Maximum number of registered users
- SF E4900-12 CPU, M5000-8 CPU: 600
- SF 4800-12 CPU: 500
- SF E4900-8 CPU, M5000-4 CPU: 480
- SF 4800-8 CPU: 400
- SF V890, T5440: 240
- SF V880: 200
- SF V250, N240, T5220: 50

Table 16 : Number of Registered users per ROC

5.2. WMS CLIENT USER PROFILE

Client usage can significantly impact the performance of the main server; this section defines user profiles in order to provide a baseline for overall client usage.
In order to determine the relative overall performance impact of the entire client user population of a ROC, it is necessary to understand the relative tasks of each type of user.
NSP Level usage model
- Groups are set up to have a maximum of 201 NEs per group (this means an RNC and all of its associated Node Bs).
- As end-to-end network monitoring is most likely to happen at the OSS level, only 15% of users are assumed to monitor the entire network. Other users look at one or a few groups at a time.
- Active Alarm Managers are used against the opened groups. It is assumed that the total number of alarm managers used is equal to 100% of the total number of WMS users supported, with a maximum of 5 alarm manager windows per user.
- At any point in time, the Historical Fault Browser (HFB) is used by about 50% of all main server users (assuming 1 instance of HFB per HFB user), and the overall query rate for this group of HFB users averages 1 query per active HFB user every 5 minutes.
- On-Line Help is used by 20% of users.

UMTS Access specific usage model
- Equipment Monitors: although the limit is 5 equipment monitors per individual user, it is assumed the average is 2 per user.
- UMTS Access NE Object Editor: 25% of users.
- Notification log tool: 10% of users.
- OSI State reporting: 10% of users.
- SRS for NE specific patch download to the main server, TIL and TMN GUIs (occasional only): 5% of users.
- Software Download to NEs (occasional): 5% of users.

ROC system management usage model
- It is anticipated that 5% of the total number of users are system administrators needing to use tools such as SunMC, add users in the LDAP directory, download patches using SRS tools, etc.
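For illustration only, the following Python sketch applies the ratios of the usage model above to a given number of concurrent users; the input value is an example, not a recommendation:

# Rough client-load baseline derived from the usage model above.
# The ratios come straight from the stated profile; the function name
# is a hypothetical placeholder.
def usage_baseline(concurrent_users):
    """Apply the stated usage-model ratios to one ROC's concurrent users."""
    return {
        # 15% of users monitor the entire network (the rest watch a few groups)
        "whole_network_monitors": round(0.15 * concurrent_users),
        # alarm managers = 100% of users, with at most 5 windows per user
        "alarm_manager_windows_max": 5 * concurrent_users,
        # 50% of users run one HFB instance, 1 query per HFB user per 5 minutes
        "hfb_queries_per_minute": (0.50 * concurrent_users) / 5.0,
        "online_help_users": round(0.20 * concurrent_users),
        "object_editor_users": round(0.25 * concurrent_users),
        "system_administrators": round(0.05 * concurrent_users),
    }

# Example: one SF E4900-12 CPU Main Server loaded with 50 concurrent clients
print(usage_baseline(50))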

5.3. CLIENT ENGINEERING CONSIDERATIONS

There is no platform specific validation done during the WMS Client installation. However, it is important that the minimum client hardware specification be met. Please note that there is a difference in specifications for WMS Clients depending on whether it is a ROC or a NOC client; see the Server Hardware Specifications section.
A guideline of 1 GB of memory is used for ROC clients. There are many different GUIs in WMS, and under most usage scenarios 1 GB should provide sufficient memory to avoid memory swapping, which could impact individual client performance.


At the NOC, connectivity from the NOC WMS Client to multiple ROCs is possible; an increased minimum memory requirement of 2 GB has been set to support more GUIs being open on the NOC client. This should ensure that the level of performance obtained on a single client station can be closely matched on a NOC station connected to two different ROCs (with twice the number of concurrent GUIs opened).
For the usage of other OAM tools, please refer to the relevant section (e.g. NPO client considerations are described in the section "PC Hardware Requirements" of the chapter "Network Performance Optimizer").
The difference between PC based and Solaris based WMS Clients is that the Solaris based WMS Clients include OS hardening & Network Time Protocol (for time synchronization).

5.4. WMS SERVER OF CLIENTS ENGINEERING CONSIDERATIONS
The WMS Server of Clients provides a mechanism, through Citrix software, to run the WMS Client on a dedicated machine and extend its display to any workstation (referred to as the Citrix ICA Client). This allows lower use of bandwidth over the client network and reduces the need to purchase multiple hardware clients with the nominal configuration (i.e. customers can continue to use their existing workstations, even if they are at a lower specification than nominal, as long as they can support the Citrix client).
At the Citrix client level a variety of OS systems can be supported (see the Citrix documentation for compatibilities). No other WMS software needs to be installed on the Citrix client (besides the Citrix client software), and therefore the Citrix client can also be used for other purposes without complex co-residency issues.
The WMS Client is installed on the Server of Clients along with the Citrix software. A thin Citrix Client is installed on all client workstations.
When launching the WMS Client from the Citrix clients, the ICA protocol extends only the display updates, keystrokes and mouse clicks to the Citrix Client, while the WMS Client applications run fully on the Server of Clients.
The Citrix Server of Clients supports both Solaris and Windows based operating system environments.
Alcatel-Lucent offers the Citrix Server of Clients integration as a service, and the number of simultaneous clients supported is dependent on the type of hardware and OS used. Please refer to your Alcatel-Lucent representative for more information.


6. HARDWARE SPECIFICATIONS

6.1. OVERVIEW

This section provides hardware recommendations on the following server components in the WMS OAM solution:
- 9353 WMS Main Server (Primary and Secondary)
- 9359 NPO
- MS-Portal Server
  o 9953 MS SUP
  o 9959 MS NPO
- Clients
- WQA Server
- Data Communication Network (DCN) equipment
- Optional other equipment
The 5620 NM tool is covered in its dedicated section as the hardware is ordered separately.
Please contact your Alcatel-Lucent representative to order the equipment.
The hardware requirements listed in this section focus on the major requirements such as CPU, memory and disk space. For complete hardware specifications, please refer to the following documentation: Alcatel-Lucent 9353 WMS Architecture, Hardware Strategy and Requirements OAM06.
The nominal hardware is RoHS compliant since July 2006. The RoHS Directive (Restriction of Hazardous Substances Directive) bans 6 substances:
- Lead (solders, electrical/mechanical components)
- Hexavalent Chromium (corrosion-resistant coating)
- Polybrominated biphenyls & Polybrominated diphenyl ethers (flame retardants for PCBs, plastics)
- Mercury & Cadmium
The following legacy hardware is supported in OAM06 but not orderable:
- SF V250
- SF V880
- SF E4800
- SF E4900
- SF V490
- Sun StorEdge 6120 Array
- Sun T3 Storage Array
- SunBlade 150
- SunBlade 1500
- Multi Service Switch 8600 (Nortel)
- Ethernet Switch 470 (Nortel)
- Ethernet Switch 5510 (Nortel)

For more information on the hardware specifications of this legacy hardware, please refer to the following documentation: Alcatel-Lucent W-NMS Engineering Guide for OAM 5.1/5.2.

6.2. SERVER HARDWARE SPECIFICATIONS

The following hardware specifications are based on capacity that needs to be supported for the
different network elements.

6.2.1 SUN NETRA 240

SUN NETRA 240
Product Classification: 9353 WMS Server, 9359 NPO Server, 9953 MS-SUP Server, 9959 MS-NPO Server
Base Hardware
- CPU: 2 x 1500 MHz UltraSPARC IIIi Processors (1 MB cache each)
- RAM: 8 GB
- Hard Disk: Internal 2 x 146 GB Disk Drives; SE3120 with 4 x 146 GB Drives
- Ethernet board: 2 x Sun Dual Gigabit Ethernet 100/1000BaseT PCI Adapters
- Tape drive: AC Power - 1 x Sun StorEdge DAT 72 4mm Tape Drive (36/72 GB); DC Power - Pinnacle Data Systems DS130 Tape Drive
Software
- Operating System: Sun Solaris 10 (05/08 and 10/08)

Table 17 : Sun N240 Hardware Requirements

6.2.2 SUN SPARC ENTERPRISE T5220

SUN SPARC ENTERPRISE T5220
Product Classification: 9353 WMS Server, 9359 NPO Server, 9953 MS-SUP Server, 9959 MS-NPO Server
Base Hardware
- CPU: 1 x 1400 MHz UltraSPARC T2 8-core Processor
- RAM: 16 GB
- Hard Disk: 8 x 146 GB Internal Disk Drives
- Ethernet board: 2 x Sun Quad Gigabit Ethernet 100/1000BaseT PCI Adapters
- Tape drive: External LTO4 Half Height SAS rack-mount tape drive for backup & restore (native capacity of an LTO4 tape cartridge is 800 GB, with an I/O speed of 120 MB per second)
Software
- Operating System: Sun Solaris 10 (05/08 and 10/08)

Table 18 : SUN SPARC ENTERPRISE T5220 Hardware Requirements


6.2.3 SUN FIRE V890

SUN FIRE V890
Product Classification: 9353 WMS Server, 9359 NPO Server, 9953 MS-SUP Server, 9959 MS-NPO Server
Base Hardware
- CPU: 4 x 1800 MHz UltraSPARC IV+ Processors with 32 MB external cache each
- RAM: 16 GB
- Hard Disk: 12 x 146 GB Disk Drives
- Ethernet board: 2 x Sun Quad Gigabit Ethernet 100/1000BaseT PCI Adapters
- Tape drive: 1 x Internal DAT 72 Tape Drive
Software
- Operating System: Sun Solaris 10 (05/08 and 10/08)

Table 19 : Sun V890 Hardware Requirements

6.2.4 SUN NETRA T5440

The NETRA T5440 machine is NEBS compliant and is available with AC or DC power.

SUN NETRA T5440
Product Classification: 9353 WMS Server, 9359 NPO Server, 9953 MS-SUP Server, 9959 MS-NPO Server
Base Hardware
- CPU: 2 x 1.4 GHz UltraSPARC T2 Plus 8-core processors
- RAM: 32 GB
- Hard Disk: 12 x 146 GB SAS internal disk drives
- Ethernet board: 2 x Sun Quad Gigabit Ethernet 100/1000BaseT PCI Adapters
- Tape drive: DAT 72 is recommended with the T5440 to address full NEBS requirements. SL24 LTO4 is recommended for automatic cartridge management when the full NEBS requirement is not needed (external SL24 LTO4 HH SAS rack-mounted tape autoloader for backup & restore, 24 LTO tape cartridge slots; native capacity of an LTO4 tape cartridge is 800 GB, with an I/O speed of 120 MB per second)
Software
- Operating System: Sun Solaris 10 (05/08 and 10/08)

Table 20 : SUN NETRA T5440 Hardware Requirements

6.2.5 SUN FIRE E4900

SUN FIRE E4900
Product Classification: 9353 WMS Server, 9359 NPO Server, 9953 MS-SUP Server, 9959 MS-NPO Server
Hardware
- CPU: 2 CPU/Memory Boards, each with 4 x UltraSPARC IV+ 1800 MHz CPUs, 16 MB external cache each
- RAM: 32 GB
- Hard Disk: 4 x 146 GB Hard Disks; 2 x ST6140 Rack, with 16 x 146 GB Drives each
- Ethernet board: 2 x Quad GigaSwift Ethernet 100/1000BaseT cards
- Tape drive: 2 x DAT 72 Tape Drives
Software
- Operating System: Sun Solaris 10 (05/08 and 10/08)
Extension
- CPU, RAM: 1 CPU/Memory Board with 4 x UltraSPARC IV+ 1800 MHz CPUs, 16 MB external cache each; 16 GB RAM

Table 21 : SF E4900 Hardware Requirements


Sun StorEdge 6140 Array
Standard Equipment:
- 16 x 146 GB drives
- 2 x dual Ethernet 10/100Base-T cards

Table 22 : Sun StorEdge 6140 Hardware Requirements


6.2.6 SUN SPARC ENTERPRISE M5000

SUN SPARC ENTERPRISE M5000
Product Classification: 9353 WMS Server, 9359 NPO Server, 9953 MS-SUP Server, 9959 MS-NPO Server
Base Hardware
- CPU: 1 CPU/Memory Board with 4 x quad-core SPARC64 VII 2400 MHz CPUs, 16 MB external cache each
- RAM: 32 GB
- Hard Disk: 4 x 146 GB (10K-RPM 2.5" SAS) internal disks; 1 x ST2540 external disk controller tray (12 x 300 GB, including 2 dual Ethernet 10/100Base-T cards); 1 x ST2501 external disk expansion tray (12 x 300 GB)
- Ethernet board: 2 x Sun Quad Gigabit Ethernet (copper) 100/1000BaseT PCIe Adapters
- Tape drive: External SL24 LTO4 HH SAS rack-mounted tape autoloader for backup & restore, 24 LTO tape cartridge slots
Software
- Operating System: Sun Solaris 10 (05/08 and 10/08)
Extension Hardware
- CPU, RAM: 1 CPU/Memory Board with 4 x quad-core SPARC64 VII 2400 MHz CPUs, 16 MB external cache each; 32 GB RAM

Table 23 : SUN SPARC ENTERPRISE M5000 Hardware Requirements

6.2.7 SUN FIRE V490

SUN FIRE V490
Product Classification: 9353 WMS Server, 9359 NPO Server, 9953 MS-SUP Server, 9959 MS-NPO Server
Base Hardware
- CPU: 2 x 1800 MHz UltraSPARC IV+ Processors with 2 MB level-2 and 32 MB level-3 cache
- RAM: 16 GB
- Hard Disk: 2 x 146 GB Disk Drives; SE3510 Rack Ready 2RU with 12 x 146 GB 15K RPM FC drives, 2 FC RAID controllers, 2 x 1 GB cache, 2 x AC power supplies
- Ethernet board: 2 x Sun Quad Gigabit Ethernet (copper) 100/1000BaseT PCI Adapters
- Tape drive (Optional): External Sun StorEdge SDLT 600 Rack Mount tape drive, LVD SCSI (300 GB capacity). It contains one tape drive and can support another tape drive (another tape drive option for the SDLT 600 can be ordered; the tape cartridge for the SDLT 600 can also be ordered)
Software
- Operating System: Sun Solaris 10 (05/08 and 10/08)
Extension
- CPU, RAM: 2 x 1800 MHz UltraSPARC IV+ Processors with 2 MB level-2 and 32 MB level-3 cache; 16 GB RAM

Table 24 : SF V490 Hardware Requirements

6.2.8 SUN SPARC ENTERPRISE M4000

SUN SPARC ENTERPRISE M4000
Product Classification: 9353 WMS Server, 9359 NPO Server, 9953 MS-SUP Server, 9959 MS-NPO Server
Base Hardware
- CPU: 2 x 2500 MHz quad-core SPARC64 VII Processors
- RAM: 32 GB
- Hard Disk: 2 x 146 GB 10K-RPM 2.5" SAS internal disks; ST2540 Rack Ready external disk controller tray, 12 x 300 GB (including 2 dual Ethernet 10/100Base-T cards)
- Ethernet board: 2 x Sun Quad Gigabit Ethernet 100/1000BaseT PCI Adapters
- Tape drive (Optional): External LTO4 HH SAS rack-mounted tape drive for backup & restore (native capacity of an LTO4 tape cartridge is 800 GB, with an I/O speed of 120 MB per second)
Software
- Operating System: Sun Solaris 10 (05/08 and 10/08)
Extension
- CPU, RAM: 1 CPU/Memory Board with 2 x 2500 MHz quad-core SPARC64 VII Processors (32 GB of RAM added accordingly)
- Hard Disk: ST2501 external disk expansion tray (12 x 300 GB)

Table 25 : SUN SPARC ENTERPRISE M4000


For more details on the proposed tape drive equipment and its compatibility with servers and domains, please refer to the Backup and Restore section 3.7.2.

6.3. CLIENTS HARDWARE SPECIFICATIONS

WMS Clients are supported on both Windows based PC clients and Solaris based Sun client workstations.
Engineering Note: Purchasing
PC Clients are not orderable through Alcatel-Lucent. It is left to the customer to purchase PC Clients (including all software associated with the client, such as OS, X-Display, etc.) from their preferred supplier, based on the hardware recommendations described in this section.
Since Sun Microsystems has retired its SPARC processor based workstations, the Ultra 45 UNIX workstation is no longer orderable. However, Sun Ultra 45 workstations, as described in Table 26 below and already deployed on site, are still supported.

6.3.1 SUN CLIENT WORKSTATION

The following are the specifications for the UNIX client supported for WMS.

SUN ULTRA 45
Product Classification: 9353 WMS Server, 9359 NPO Server, 9953 MS-SUP Server, 9959 MS-NPO Server
Base Hardware
- CPU: 1 x UltraSPARC IIIi 1600 MHz
- RAM: 2 GB
- Hard Disk: 1 x 250 GB
- Ethernet board: 10/100/1000 Mb/sec
Software
- Operating System: Sun Solaris 10 (05/08 and 10/08)

Table 26 : Sun Ultra 45 Hardware Requirements


Engineering Rule: Unix Client
The UNIX Client workstation is applicable to the WMS application only. It is not applicable to the other client applications: NPO, MS-PORTAL, WQA, RFO, SDA and WPS.


6.3.2 WINDOWS BASED PC CLIENTS

6.3.2.1 WMS CLIENT

Engineering Rule: Windows PC requirements for WMS application
Engineering Rule 1: The legacy Windows PC requirements for WMS are defined in the table below. This generic model is applicable for the usage of the WMS application only within a ROC architecture.
Engineering Rule 2: The legacy PC client requirement of 512 MB RAM is no longer supported in OAM 06 (to support this RAM, it is recommended to use the Citrix Server of Clients; please refer to section 5.4).

Windows PC client Hardware Requirements for WMS
Base Hardware
- CPU: 1 CPU Pentium 4, 2.8 GHz (512 KB cache), or 1 CPU Core 2 Duo
- RAM: 1 GB or higher
- Hard Disk: 40 GB disk or higher
- Ethernet board: 100/1000 Mb/sec Ethernet boards
Software
- Operating System: Windows Vista Business/Enterprise Service Pack 1; Windows XP Professional Service Pack 2/3; Windows 2000 Professional Service Pack 4 (with Windows Vista it is highly recommended to add 1 GB of RAM to the above requirements)
- X Display: Hummingbird Exceed 2008 or Cygwin 1.5 or greater (note: Alcatel-Lucent does not support this software)

Table 27 : Windows PC Hardware Requirements for WMS


Engineering Recommendation: Optimal ROC WMS Client Performance
For optimal performance of a WMS client used within a ROC architecture, it is recommended to use a 1 CPU Core 2 Duo, 2 GB RAM and 80 GB hard disk configuration.

Engineering Rule: NOC WMS Client
For a WMS client used within a NOC architecture, it is mandatory to use a PC Client with at least 2 GB RAM. 3 GB RAM or higher is highly recommended for a large network.


6.3.2.2 NPO & MS-PORTAL CLIENT

Engineering Rule: Windows PC requirements for NPO, MS-NPO / MS-SUP application
The legacy Windows PC requirements for the NPO and MS-NPO / MS-SUP applications are defined in the table below.

Windows PC client Hardware Requirements for NPO & MS-PORTAL
Base Hardware
- CPU: 1 CPU Pentium 4, 2.8 GHz (512 KB cache), or 1 CPU Core 2 Duo, Core 2 Quad or higher
- RAM: 2 GB or higher
- Hard Disk: 40 GB disk or higher (at least 1 GB free in the C: partition)
- Ethernet board: 100/1000 Mb/sec Ethernet boards
Software
- Operating System: Windows XP (English or French) Service Pack 2 or 4
- Other Applications: Microsoft Office 2003; Java 1.5 (JRE 1.5.0_06 and JDK 1.5.0_06) must also be installed on the MS-NPO Client.

Table 28 : Windows PC Hardware Requirements for MS-PORTAL


Engineering Recommendations: MS-PORTAL HMI Server
Recommendation 1: For MS-PORTAL Client usage on PCs with less than 2 GB RAM, it is recommended to use the Citrix Server of Clients (also known as the HMI server).
Recommendation 2: For an MS-PORTAL client used through a VPN connection (e.g. Alcatel-Lucent Brick Gateway), it is recommended to use the Citrix Server of Clients (also known as the HMI server).
Recommendation 3: If the bandwidth between the OAM clients sub-network (MS-PORTAL client) and the OAM servers sub-network (MS-PORTAL server) is less than 1024 Kbit/s (per client), it is recommended to use the Citrix Server of Clients (also known as the HMI server).
Please refer to section 12.5 for more information.

6.3.2.3 WPS CLIENT

Engineering Rule: Windows PC requirements for WPS application
Engineering Rule 1: For a Medium Network (700 Node B) or Large Network (2000 Node B), it is mandatory to use a PC Client with at least 2 GB RAM and a Core 2 Duo CPU.
Engineering Rule 2: For an X-Large Network (4000 Node B), it is mandatory to use a PC Client with at least 4 GB RAM and a Core 2 Duo CPU.
The WPS client should not be used simultaneously with any other client applications. Otherwise, it is highly recommended to follow the RAM specifications described in the Client Simultaneous Usage section (see section 6.3.2.6).
The generic model described in the WMS table above can be used for the other hardware characteristics.

If you encounter memory problems with WPS managing a large network, it is possible to increase the RAM used by WPS. Please call the Alcatel-Lucent support team.

6.3.2.4 WQA CLIENT

Engineering Rule: Windows PC requirements for WQA application
Engineering Rule 1: The supported client platform is a PC running a Windows operating system which supports Windows Internet Explorer 6.x. Optimal performance will be obtained if there are sufficient resources on the client to support WQA and Excel (main requirement: 100 MB free should be available before starting up WQA and Excel).
Engineering Rule 2: 1 GB of RAM has to be considered for the usage of the WQA application. In case of simultaneous usage with any other client application (e.g. WMS), it is highly recommended to follow the RAM specifications described in the Client Simultaneous Usage section (see section 6.3.2.6).
With regards to hardware characteristics, the WMS generic model described in section 6.3.2.1 can be used.
With regards to the OS software supported with the WQA client: Windows XP is supported in OAM 6.0 (Windows Vista is not supported in OAM 6.0).

6.3.2.5 RFO CLIENT

For RFO Client specifications, please refer to section 14.

6.3.2.6 CLIENT SIMULTANEOUS USAGE

The following table describes the minimum RAM requirements for a PC simultaneously running the WMS application with another client application.

Usage | WMS & NPO | WMS & WPS | WMS & WQA | WMS & RFO
Medium Network | 3 GB | 3 GB | 2 GB | 2 GB
Large Network | 4 GB | 5 GB | 2 GB | 3 GB

Table 29: RAM requirements for client simultaneous usage

6.4. WQA SERVER HARDWARE SPECIFICATIONS


Windows PC Server requirements for WQA
Base Hardware
- CPUs: 2 CPU
- RAM: 8 GB
- Hard Disk: 600 GB (please see the Engineering Notes below)
- Ethernet Board: 100/1000 Mb/sec Ethernet boards
Software
- Operating System: Microsoft Windows Server 2003 Enterprise Edition 32-bit, Service Pack 2

Table 30 : WQA Hardware Specifications


Engineering Note: WQA Server
Engineering Note 1 - The amount of disk space mentioned above is for the WQA application itself. Because of RAID, the sum of the sizes of the individual disks will be higher than this. Having a spare disk on site should also be considered. A fast disk I/O subsystem is recommended given that WQA manipulates large amounts of data on its disks. As such, the server should use 10K RPM disks and a hardware based RAID controller; RAID 5 is recommended.
Engineering Note 2 - The native Windows firewall is not supported.
Engineering Note 3 - The WQA server runs on the Windows Server 2003 32-bit Operating System (a 64-bit OS is not supported).
Engineering Note 4 - The Windows server hardware is not orderable through Alcatel-Lucent. It is left to the customer to purchase the server from their preferred supplier.
Engineering Note 5 - Windows Server 2003 is no longer orderable from Microsoft. The ordering of the WQA server hardware & Operating System is the responsibility of the Project team or of the customer. They will be required to obtain a Windows 2008 Enterprise Edition license key and either request the hardware provider to downgrade to Windows Server 2003 Enterprise Edition or contact Microsoft license support directly on the numbers in the following link:
http://www.microsoft.com/licensing/resources/vol/numbers.mspx
Engineering Note 6 - 600 GB of disk space is reserved for the basic product configuration, i.e. kernel + 1 feature (like support of CTn) on the server. For additional features, like support of CFT or CTx, it is necessary to allocate additional space of 250 GB per feature.
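For illustration, Engineering Note 6 can be expressed as a small Python helper (a sketch, assuming only the stated sizing rule; the function name is a hypothetical placeholder):

# Applies Engineering Note 6: 600 GB covers kernel + 1 feature;
# each additional feature (e.g. CFT, CTx) needs a further 250 GB.
def wqa_disk_gb(num_features):
    if num_features < 1:
        raise ValueError("WQA is sized for at least kernel + 1 feature")
    return 600 + 250 * (num_features - 1)

assert wqa_disk_gb(1) == 600   # kernel + CTn
assert wqa_disk_gb(3) == 1100  # two additional features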

6.5. GENERAL AVAILABILITY OF SUN SERVERS

Hardware Serviceability
To reduce the Mean Time To Repair (MTTR), the following strategies can be employed:
- Spare FRUs at the customer sites,
- A Sun Spectrum Platinum contract: 7x24 on-site support, 7x24 phone support, and 2-hour response time,
- A protected server environment: rooms with access control and air-conditioning, protected power supplies (an uninterruptible power supply (UPS) shall be used to ensure high availability), etc.
Hardware Reliability
The reliability of Sun servers and their sub-components is excellent, such that the MTBF (Mean Time Between Failures) is maximized. Most outages on customer sites are typically due to software issues, human errors, or acts of God.
Environmental Monitoring
The Sun server's environmental monitoring and control system continuously monitors temperatures at several critical locations throughout the machine. Readings from thermal sensors provide input to the server's airflow management system, which automatically adjusts fan speeds as necessary to keep component operating temperatures within acceptable ranges.
If measured temperatures exceed safe operating limits and the fans are already operating at maximum speed, the system automatically notifies the console and suspends operation.

6.5.1 SUN SERVER HARDWARE REDUNDANCY STRATEGY

The Sun servers used for the WMS Main and MS-NPO servers provide hardware redundancy using internal redundancy, environmental monitoring and the other features discussed in this section.

6.5.1.1 SF 4X00

The architecture of the Sun Fire 4x00 server family is built around the redundant re-configurable Sun
Fireplane Interconnect.
Hardware Redundancy
Sun Fire 4x00 servers provide full hardware redundancy.
Should any key component fail (whether it is a CPU, memory, system controller, power supply, cooling
unit, interconnect or system clock) the system is able to recover, and in many cases continue to run
uninterrupted.
Full hardware redundancy includes the following components:
- Redundant CPUs
- Memory
- Memory boards
- I/O assemblies
- I/O adapters (if configured)
- Redundant system controllers
- Redundant system clock with automatic fail-over
- Redundant AC power sources, facilitated by the Redundant Transfer Switch
- Redundant Sun Fireplane switches
- Redundant power supplies and an intelligent power switching mechanism that will fail over to the remaining power modules


Hot Plug-able Components and Dynamic Reconfiguration
Hot plug allows for the removal of failed components and insertion of replacements without first powering down the system.
Dynamic Reconfiguration (DR) - part of the Solaris 8 Operating Environment, DR enables you to dynamically reconfigure, remove or install core system components in your server while the Solaris OE and your applications are running.
With Dynamic Reconfiguration, you can perform the following functions:
- Configure CPU/Memory boards into a running domain
- Install new CPU/Memory boards in a domain
- Hot-swap CPU/Memory boards
- Hot-swap an I/O Assembly
- Hot-swap a PCI Card
- Move a CPU/Memory board between Dynamic System Domains
- Initiate parallel DR operations

Automatic System Recovery (ASR)
The SUN Fire 4x00 provides automatic system recovery from the following types of hardware faults:
- CPU modules
- Memory modules
- PCI buses
- System I/O interfaces

The automatic system recovery allows the system to resume operation after experiencing certain
hardware failures. The automatic self-test feature enables the system to detect failed hardware
components. An auto-configuration capability designed into the system's boot firmware allows the
system to de-configure failed components and restore system operation.
Redundant Power Distribution System
Three power supplies provide the power. These modular hot swap units are installed and removed
from the rear of the system, even while the machine is fully operational. Maximally configured systems
operate continuously with two power supplies installed.
Power supplies feed all active system components through a common power distribution bus. Power is
drawn equally from all supplies installed in the system. If the service from one power supply is
interrupted, the system power demand shifts automatically to the remaining active supplies. If the
combined output of the remaining supplies satisfies the system's requirements, the machine continues
to operate with no interruption of service.
With a hot standby, or "n+1" power supply installed, the system can continue operating while a
replacement power supply is being installed. If power demands exceed the output of the active
supplies, the system automatically notifies the console and suspends operation.

6.5.1.2 SF V8X0

The Sun Fire V8x0 Server is a high-performance, reliable server. CPUs are added in pairs via a dual processor/memory module. All memory is accessible by any processor, as workgroup servers do not implement domains or partitions. An internal storage array supports twelve Fibre Channel disks.

Hardware Redundancy
This powerful server incorporates many key RAS features such as Automatic System Recovery
(ASR), multi-pathing support to the storage subsystems and networks, hot-swap power supplies,
cooling fans, internal disks and PCI slots. All systems are configured with redundant (N+1) power
supplies and a redundant set of cooling fan trays.
Automatic System Recovery (ASR)
The SUN Fire V8x0 provides automatic system recovery from the following types of hardware faults:
- CPU modules
- Memory modules
- PCI buses
- System I/O interfaces

The automatic system recovery allows the system to resume operation after experiencing certain
hardware failures. The automatic self-test feature enables the system to detect failed hardware
components. An auto-configuration capability designed into the system's boot firmware allows the
system to de-configure failed components and restore system operation.
Redundant Power Distribution System
Three power supplies provide the power. These modular hot swap units are installed and removed
from the rear of the system, even while the machine is fully operational. Maximally configured systems
operate continuously with two power supplies installed.
Power supplies feed all active system components through a common power distribution bus. Power is
drawn equally from all supplies installed in the system. If the service from one power supply is
interrupted, the system power demand shifts automatically to the remaining active supplies. If the
combined output of the remaining supplies satisfies the system's requirements, the machine continues
to operate with no interruption of service. With a hot standby, or "n+1" power supply installed, the
system can continue operating while a replacement power supply is being installed. If power demands
exceed the output of the active supplies, the system automatically notifies the console and suspends
operation.

6.5.1.3 NETRA 240

Netra 240 is NEBS Level 3 certified. The Sun Netra 240 Server is a high-performance, reliable server
for enterprise network computing based upon the UltraSPARC IIIi microprocessor technology. All
memory is accessible by any processor as workgroup servers do not implement domains or partitions.
An internal storage array supports 2 Ultra 160 SCSI disks.
Hardware Redundancy
This powerful server incorporates many key RAS features, such as multi-pathing support to the storage subsystems and networks, hot-swap power supplies, cooling fans and internal disks. All systems are configured with redundant (1+1) power supplies and a redundant set of cooling fan trays.

6.5.1.4 SUN SPARC ENTERPRISE T5220

Sun Enterprise T5220 servers incorporate the following key features to increase RAS:
- Redundancy and hot-swap components
- Reduced parts count, which contributes to better overall stability and reliability of the platform
- Processor thread and core off-lining, and built-in RAID capabilities


- Parity protection and error correction capabilities
- Integrated Lights Out Management (ILOM) service processor to ease remote management and provide considerable administrative flexibility
- Superior energy efficiency
- Robust virtualization technology
- Comprehensive fault management

Hot Plug-able Components and Dynamic Reconfiguration


Sun Enterprise T5220 servers support hot-plug of chassis mounted hard drives, and hot-swap of
redundant fan units and power supplies. For systems configured with redundant components,
administrators can utilize software commands to remove and replace disks, power supplies, and fan
units while the system continues to operate. T5220 also supports RAID capabilities.
Power Supply Redundancy
The system features two hot-swappable power supplies, either of which is capable of handling the system's entire load. Thus, the system provides N+1 redundancy, allowing the system to continue operating should one of the power supplies or its AC power source fail.
Integrated Lights Out Management for Simplified Remote Serviceability
The Integrated Lights Out Management (ILOM) service processor is a system controller built into all
T5220 servers, facilitating remote system management, simplifying administration, and speeding
maintenance tasks.
The ILOM circuitry runs independently of the server, using the server's standby power. Therefore, ILOM firmware and software continue to function when the server operating system goes offline or when the server is powered off. ILOM monitors the following T5220 server conditions:
- CPU temperature conditions
- Hard drive status
- Enclosure thermal conditions
- Fan speed and status
- Power supply status
- Voltage conditions
- Solaris watchdog, boot time-outs and automatic server restart events

ILOM provides administrators with the capability to monitor and control T5220 servers over a
dedicated Ethernet connection and supports secure shell (SSH), Web, and Integrated Platform
Management Interface (IPMI) access. ILOM functions can also be accessed through a dedicated
serial port for connection to a terminal or terminal server.

6.5.1.5 SUN SPARC ENTERPRISE T5440

The Sun Enterprise T5440 servers incorporate the following key features to increase RAS:
- Lower heat generation, reducing hardware failures
- Hot-pluggable hard drives
- Redundant, hot-swappable power supplies (four)
- Redundant N+1 hot-swappable fan modules
- Environmental monitoring
- Internal hardware drive mirroring (RAID 1)
- Error detection and correction for improved data integrity
- Easy access for most component replacements
Hot-Pluggable and Hot-Swappable Components
Sun Enterprise T5440 server hardware is designed to support hot-plugging of the chassis-mounted hard drives, and hot-swapping of fan units and power supplies.

Power Supply Redundancy


The T5440 server provides four hot-swappable power supplies, enabling the system to continue
operating should one of the power supplies fail or if a power source fails.
Environmental Monitoring
The T5440 server features an environmental monitoring subsystem that protects the server and its
components against:
- Extreme temperatures
- Lack of adequate airflow through the system
- Power supply failures
- Hardware faults
Remote Manageability with ILOM
The Integrated Lights Out Manager (ILOM) feature is a system controller, built into the server, that enables you to remotely manage and administer the server. The ILOM software is preinstalled as firmware, and initializes as soon as you apply power to the system.
ILOM enables you to monitor and control your server over an Ethernet connection (supports SSH), or by using a dedicated serial port for connection to a terminal or terminal server. ILOM provides a command-line interface and a browser-based interface that you can use to remotely administer geographically distributed or physically inaccessible machines. In addition, ILOM enables you to run diagnostics remotely (such as POST) that would otherwise require physical proximity to the server's serial port.
You can configure ILOM to send email alerts of hardware failures and warnings, and other events related to the server. The ILOM circuitry runs independently of the server, using the server's standby power. Therefore, ILOM firmware and software continue to function when the server operating system goes offline or when the server is powered off.
ILOM monitors the following SPARC Enterprise T5440 server conditions:
- CPU temperature conditions
- Hard drive status
- Enclosure thermal conditions
- Fan speed and status
- Power supply status
- Voltage conditions
- Solaris watchdog, boot time-outs and automatic server restart events

6.5.1.6 SUN SPARC M4000/M5000

To deliver reliability, availability and serviceability, the Sun Enterprise M4000/M5000 offers the following features:
- Supports redundant configurations and active replacement of power supplies and fans.
- Periodically performs memory patrol to detect memory software errors and stuck-at faults (memory patrol).
- Supports redundant configurations, mirroring, and active replacement of disks.
- XSCF (detailed below) collection of fault information, and preventive maintenance using different types of warnings.
- Shortens downtime through automatic system reboot and a reduced system start-up time.
- Status LEDs mounted on the main components and the operator panel to display which active components need replacement.
- Centralized systematic monitoring, such as with SNMP.

Power Supply Redundancy



The system features two (M4000) or four (M5000) hot-swappable power supplies, capable of handling the system's entire load. Thus, the system provides N+1 redundancy, allowing the system to continue operating should one of the power supplies or its AC power source fail.
eXtended System Control Facility Unit (XSCFU)
The eXtended System Control Facility Unit (XSCFU) is a service processor that operates and administers the M4000/M5000. The XSCFU diagnoses and starts the entire server, configures domains, offers dynamic reconfiguration, and detects and notifies various failures. The XSCFU provides standard control and monitoring functions over the network; using these functions, the server can be started, configured and managed from remote locations.
The XSCFU uses the eXtended System Control Facility (XSCF) firmware to provide the following functions:
- Controls and monitors the main unit hardware
- Monitors the Solaris Operating System (Solaris OS), power-on self-test (POST), and the OpenBoot PROM
- Controls and manages the interface for the system administrator (such as a terminal console)
- Administers device information
- Controls remote messaging of various events
The XSCF firmware provides the system control and monitoring interfaces listed below:
- Serial port, through which the command-line interface (XSCF shell) can be used
- Two LAN ports:
  o XSCF shell
  o XSCF Web (browser-based user interface)

6.5.1.7 SUN STOREDGE 6140 ARRAY

The ST6140 is a stackable fail-safe storage array with 16 x 146 GB disks at 15K RPM. The configuration supporting the OAM solution is a dual ST6140:
- 1 controller tray
  o 16 x 146 GB 4 Gbps Fibre Channel disks, 15K RPM
  o 2 redundant FC RAID controllers
  o 2 redundant FC connections to the server
  o 2 redundant 100Base-T Ethernet connections, 2 IP addresses
  o 2 redundant AC power supplies
  o 2 redundant cooling fans
- 1 expansion tray
  o 16 x 146 GB 4 Gbps Fibre Channel disks, 15K RPM
  o 2 redundant I/O modules, connected to the controller tray
  o 2 redundant AC power supplies
  o 2 redundant cooling fans

6.5.1.8 SF V490

To deliver high levels of reliability, availability and serviceability, the Sun Fire V490 system offers the following features:
- Hot-pluggable disk drives
- Redundant, hot-swappable power supplies
- Environmental monitoring and fault detection
- Automatic system recovery (ASR) capabilities
- Multiplexed I/O (MPxIO)
- Dual-loop enabled FC-AL subsystem
- Support for disk and network multipathing with automatic failover capability


Hot-Pluggable and Hot-Swappable Components
In a Sun Fire V490 system, the FC-AL disk drives are hot-pluggable components and the power supplies are hot-swappable.
The system centerplane provides slots for up to two CPU/Memory boards. Each CPU/Memory board incorporates two processors with static random access memory (SRAM) external cache memory and slots for up to 16 memory modules.

Engineering Note: Sun Fire V490


CPU/Memory boards on a Sun Fire V490 system are not hot-pluggable

Automatic System Recovery


ASR on the Sun Fire V490 server provides for automatic fault isolation and restoration of the operating
system following non-fatal faults or failures of these hardware components:
- Processors
- Memory modules
- PCI buses and cards
- FC-AL subsystem
- Ethernet interface
- USB interfaces
- Serial interface
In the event of such a hardware failure, firmware-based diagnostic tests isolate the problem and mark
the device as either failed or disabled. The OpenBoot firmware then deconfigures the failed device
and reboots the operating system. This all occurs automatically, as long as the Sun Fire V490 system
is capable of functioning without the failed component.
Once restored, the operating system will not attempt to access any deconfigured device. This prevents
a faulty hardware component from keeping the entire system down or causing the system to crash
repeatedly.
Power Supply Redundancy
The system features two hot-swappable power supplies, either of which is capable of handling the
system's entire load. Thus, the system provides N+1 redundancy, allowing the system to continue
operating should one of the power supplies or its AC power source fail.
Environmental Monitoring
The Sun Fire V490 system features an environmental monitoring subsystem designed to protect
against:
- Extreme temperatures
- Lack of adequate airflow through the system
- Power supply failures
Monitoring and control capabilities reside at the operating system level as well as in the system's Boot
PROM firmware. Temperature sensors are located throughout the system to monitor the ambient
temperature of the system and the temperature of several application-specific integrated circuits
(ASICs). This ensures that monitoring capabilities remain operational even if the system has halted or
is unable to boot.

6.5.2 STANDBY SERVERS STRATEGY
The WMS Servers have built-in redundancy. This section describes a server standby strategy.
Standby servers (also called cold standby servers) can be made available in the event of a failure of a
WMS Primary / Secondary Main Server. In the event of a failure of one of these servers, operators
can manually switch over from the failed server to the standby server.
This strategy can be implemented according to the following two scenarios:
Scenario 1: If only 1 standby server is available
- The standby server must be initially installed and configured as a WMS Primary Main Server.
- In the event of a failure of a WMS Primary Main Server, the standby server can be manually
activated and becomes the WMS Primary Main Server.
Scenario 2: If more than one standby server is available
- The standby servers must be initially installed and configured as the servers that they are standby
for.
- In the event of a failure of any of the supported servers the associated standby server can be
manually activated to replace the faulty server.
Once the standby server is activated, the operator must restore the last data backup that was
performed on the failed server. Please refer to the Wireless Management System Backup and Restore
User Guide, for more information.
The standby server hardware must have the same hardware configuration as the original server.
Thus, if there are different types of servers deployed, there must be one standby server of each type
available to have a backup plan for each of them.
Disaster Recovery (or Emergency Recovery) is provided as an enhanced service by Alcatel-Lucent.
The customer should contact their local Alcatel-Lucent representative for more information.

6.6. DCN HARDWARE SPECIFICATIONS
This section provides hardware recommendations for the various DCN components that are part of the
OAM reference architecture, as well as recommendations that will help engineer the OAM DCN.
Hardware specifications and recommendations are provided for:
- Firewalls
- LAN/WAN equipment of the OAM reference architecture
6.6.1 FIREWALL EQUIPMENT SPECIFICATIONS
As part of the OAM reference architecture, firewalls are recommended to increase the security of the
OAM network.
The use of firewalls is strongly recommended:
- Between any OAM server network and other non-OAM networks (i.e. Client Network, NE Network,
  and other non-OAM networks).
- For communications that span multiple sites (i.e. from the OAM server network to a remote ROC).
For WMS, the recommended firewall solutions include:
- Brick VPN Firewall
Alcatel-Lucent's VPN Firewall portfolio offers a broad range of carrier-class platforms for delivering
advanced security, VPN, bandwidth management, and other high-demand IP services.
VPN Firewall Brick platforms
VPN Firewall Brick platforms offer integrated security, VPN, and bandwidth management functionality
in one cost-effective configuration. They deliver bullet-proof security and comprehensive, high-performance
VPN capabilities for enterprise environments ranging from small offices to large data centres.
For complete technical specifications and other product literature on the VPN Firewall Brick platforms,
please refer to: http://www1.alcatel-lucent.com/products/index.jsp?subNumber=&letter=V&firstDoc=20

Specifications
Model           Throughput (Gbps)   Session connections per second   Total concurrent sessions
VPN Brick 150   0.33                20 000                           245 000

Table 31 : VPN Firewall Brick Platform
6.6.2 VPN ROUTER
VPN Firewall Brick platforms serve several roles in enterprise and carrier IP networks: basic IP access
router, dedicated VPN switch, or firewall.

Specifications
Model               VPN Brick 150
Memory              128 MB (base)
Interfaces          4 built-in 10/100 Ethernet LAN ports (standard)
Encryption          128 bit encryption
Number of tunnels   License allows up to 1000 simultaneous tunnels
Software            ALSMS V9.1 Package

Table 32 : VPN Router Platform

As part of product support to customers, it is required by support teams to be able to VPN into the
Alcatel-Lucent solution remotely to troubleshoot and resolve issues within the committed service level
agreements. This support VPN access is provided as part of the RAMSES (Remote Advanced
Monitoring System for engineering Support) VPN solution which is mandatory on each customer site.
The following diagram provides an architectural view of the RAMSES solution.

Figure 15 : RAMSES Solution Architectural Diagram
(The figure shows Alcatel-Lucent secure premises with authorised users and servers connected over
the public IP network, through a Netscreen NS-5GT gateway, to a PC Linux Mediation Device on the
customer premises in front of the ALU WMS and Network Elements.)
The monitored Customer Network is accessed via the public IP network from Alcatel-Lucent's secure
premises. The Customer Network and the RAMSES data flow are protected from the Internet by a
dedicated gateway (firewall) denying any flow except an IPSec tunnel from a central gateway on
Alcatel-Lucent premises.
The IPSec tunnel characteristics are:
- Single tunnel between 2 Netscreen firewalls
- Managed by Alcatel-Lucent as per security rules.
- Technical features:
  o SHA-1 for source authentication and data integrity
  o 3-DES for confidentiality (encryption)
  o crypto keys dynamically exchanged via IKE
  o anti-replay protection enabled
At the customer side, the Netscreen firewall is connected to a unique point called RAMSES Mediation
Device (Linux PC).
The remote commands from Alcatel-Lucent premises towards the NE of the monitored network are
issued from:
- RAMSES application servers on an isolated and protected sub-network (DMZ)
- RAMSES authorized users on the Alcatel-Lucent Intranet
These commands are controlled and relayed (when authorized) by proxy software running on the
Mediation Device.

6.6.3 LAN / WAN HARDWARE RECOMMENDATIONS
For the WMS network, the following LAN / WAN switch is proposed:
- OmniSwitch 6850

Engineering Note: OmniSwitch 7700
The OmniSwitch 7700 switch has been removed from the OAM06 WMS solution portfolio.
6.6.3.1 OMNISWITCH 6850
The OmniSwitch 6850 is recommended for LAN connections, including local client site connections
and remote/local NE sites. The OmniSwitch 6850 is available in 2 models:
- OS6850-48 with 48 auto-sensing Ethernet 10/100/1000Base-T ports
- OS6850-24 with 24 auto-sensing Ethernet 10/100/1000Base-T ports
Characteristics:
- AC or DC power supplies
- 4 Ethernet 1000BaseX ports
- NEBS compliant
- Stackable to 8 switches (8*48=384 Ethernet ports)
Engineering Note: SF4x00
The WMS Main Server implemented on the SF4x00 platforms requires 2 or 4 (depending on the
configuration) 1 Gb Ethernet ports. When optical Gb Ethernet connections are needed (i.e. SF4x00),
it is required to use an alternate Alcatel-Lucent or other vendor switch.
For additional details on the OmniSwitch 6850, please see http://www.alcatel-lucent.com
6.6.4 TERMINAL SERVER
For server console display, the terminal server is used to connect serial port A of the WMS servers to
the local LAN in order to provide supervision through the Telnet protocol.
The recommended terminal server is the NEBS & RoHS compliant LX-4016T series from MRV. There
is no modem within this Terminal Server for PSTN dial-in remote access.
(LX8020S series are also supported but are no longer orderable.)
Alcatel-Lucent recommended terminal server specifications for remote console display are included in
the table below:
Specifications
Model          MRV LX-8020S                      MRV LX-4016T
Interfaces     2 Ethernet ports for connection   2 Ethernet ports for connection
               to LAN;                           to LAN;
               20 RS-232 RJ45 ports for          16 RS-232 RJ45 ports for
               connection to servers             connection to servers
Power Supply   2 redundant power supplies,       2 redundant power supplies,
               AC or DC                          AC or DC

Table 33 : Terminal Server Console Specifications
6.7. OTHER EQUIPMENT
This section provides recommendations for optional equipment that can be added to the WMS
offering. Note that this equipment is not orderable through Alcatel-Lucent.
For printers needed as part of the WMS solution, the recommendation from Alcatel-Lucent is the
Lexmark printer with the specifications in the table below. The driver for this printer is included in the
WMS software load. No integration from Alcatel-Lucent is available if another network printer is
chosen for WMS.
Hardware           Lexmark T632N Laser Printer
Print Technology   Laser, Monochrome
Print Resolution   1200 dpi
Print Speed        Up to 38 pages per minute
Processor          350 MHz
Standard Ports     10/100 Base-TX Ethernet, USB

Table 34 : Lexmark Printer Hardware Requirements

For UPS requirements for Uninterrupted Power Supply, the suggested UPS is APC Smart UPS
SURTD5000RMXLI.

7. NETWORK ARCHITECTURE
This chapter contains network architecture considerations for the WMS network. Considerations for
features which have an impact on WMS network design are also within the scope of this chapter.
7.1. DEFINITION - NOC / ROC ARCHITECTURE
The National Operation Centre (NOC) provides a view of the whole network. A NOC is composed of
WMS Clients. From the WMS Clients in a NOC, it is possible to monitor several ROCs. This allows a
complete network to be managed remotely from a single location.
A Regional Operation Centre (ROC) is designed to manage a region of the network. It is designed so
that it is autonomous and independent of the other ROCs and the NOC.
All OAM servers in a ROC are co-located at one site and it is recommended that they share the same
LANs (i.e. OAM LAN & Network Element LAN) and can provide an integrated view of the alarms and
performance counters for all the NEs managed by that ROC.
A WMS network is composed of the following components:
- Primary Main Server (Mandatory)
- WMS Client (one Mandatory)
- RAMSES (Mandatory)
- Secondary Main Server (Optional)
- Server of Clients (Optional)
- WQA server (Optional)
- NPO server (Optional)
- WPS Client (Optional)
- RFO tool (Optional)
- Other networking equipment (Optional)
7.2. REFERENCE ARCHITECTURE
The OAM network represented in the figure below serves as a reference for the network architecture
considerations that will be outlined throughout this chapter.
Figure 16 : Reference Architecture
(The figure shows the OAM Network with the Primary and Secondary Main Servers, NPO server(s),
WQA server, Server of Clients and Mediation Device on OmniSwitch LANs; local ROC clients, remote
ROC/NOC clusters, NOC stations, external OSS/BSS and remote access users reach the OAM
network over IP/ATM switches, VPN switches and the Internet; the NE Network connects local and
remote UMTS Network Elements.)
7.3. FIREWALL IMPLEMENTATION
Firewalls (and packet filters) are implemented for security reasons, to enforce flow control so that only
required communications are allowed between different networks.
Firewall support information (i.e. the list of protocols used by the WMS Servers) is available for the
point-to-point communications in the OAM network.
Recommendations for the deployment of firewalls are as follows:
- Firewalls should be placed between the OAM Client network (NOC) and the rest of the OAM
  network (mainly the server network).
- Firewalls should be used on any communications path which goes from one site to another. This
  would include the communication between the OAM-NE network on which the servers reside and
  any remote OAM-NE networks.
- There are no connectivity requirements between the OAM interfaces of the NEs themselves (i.e.
  the OAM interface on one NE doesn't need to communicate with OAM interfaces on other NEs;
  they only need to communicate with the OAM servers).
- Firewalls should be implemented between OAM server interfaces which lead to other non-OAM
  networks (for example DHCP, DNS, centralized B&R interfaces on WMS servers, etc.). This
  should be based on a security assessment.
- Firewalls are not recommended within a ROC to subdivide the server network (OAM-OAM) or the
  NE Network (i.e. there should be no firewalls between the communication paths of the WMS
  servers in a ROC). The only firewalls should be those leading to the client network or a remote
  NE network.
7.4. NETWORK INTERFACES ON WMS SERVER
This section gives some high-level information about the standard hardware configurations used in
WMS servers. The specific OAM server hardware components are available via different predefined
bundles. Therefore, from an engineering and network design point of view, these bundles are not a
variable which needs to be covered in the context of DCN architecture. An important note is that the
WMS software installation is fully automated using scripts which will not allow for variations of the
hardware bundle outside the scope of what has been defined by WMS.
This section will outline the network connectivity variations for each WMS server.
The tables below define different server network interface configurations. The configurations define
how the different network segments can be connected directly to one or more server network
interfaces. Where an interface is used to connect to more than one type of network, the assumption is
that these networks are merged together into one single combined network (same subnet).
Interface numbers start at 0 and go up to (n-1), where n is the total number of interfaces (for example,
for the SF V8x0, the interface numbers go from 0 to 7, but given IPMP redundancy, we only need to
specify connectivity for the first half).
7.4.1 SERVER INTERFACE CONFIGURATION DEFINITIONS
The tables below show the possible configurations of the WMS Main Servers' network interface
groups. Note that not all configurations are supported by all server types.
Note: The following list provides an understanding of the Communication Group field in the tables
below:
OAM_OAM - For connectivity from WMS to other OAM servers, OAM clients (local or remote),
External OSS
OAM_NE - For connectivity from WMS to managed Network Elements
OAM_Backup - For connectivity from WMS to centralised backup systems (such as Veritas or Legato)
and Disaster Recovery synchronization
All Networks - For connectivity from WMS to any of the above mentioned components
Engineering Rule: IP addresses and subnets
The IP addresses used for each group must be in different subnets.
Network Interface Number   Communication Group
0                          All Networks

Table 35 : Interface Configuration - Configuration A
Network Interface Number   Communication Group
0                          OAM_OAM + OAM_Backup
1                          OAM_NE

Table 36 : Interface Configuration - Configuration B
Network Interface Number   Communication Group
0                          OAM_OAM
1                          OAM_Backup
2                          OAM_NE
3                          Reserved for future use

Table 37 : Interface Configuration - Configuration C
Network Interface Number   Communication Group
0                          OAM_OAM + OAM_NE
1                          OAM_Backup

Table 38 : Interface Configuration - Configuration D
Network Interface Number   Communication Group
0                          OAM_OAM
1                          OAM_Backup
2                          Citrix ICA Clients
3                          Reserved for future use

Table 39 : Interface Configuration - Configuration E
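For illustration only, the five configurations above can be captured as a small data structure, for
example to sanity-check an IP plan against the subnet rule. The following Python sketch simply
restates Tables 35-39 (configuration A is assumed to sit on interface 0); it is not an official WMS data
format.

    # Interface configurations of Tables 35-39: interface number -> groups.
    INTERFACE_CONFIGS = {
        "A": {0: ["All Networks"]},                         # assumed on interface 0
        "B": {0: ["OAM_OAM", "OAM_Backup"], 1: ["OAM_NE"]},
        "C": {0: ["OAM_OAM"], 1: ["OAM_Backup"], 2: ["OAM_NE"],
              3: ["Reserved for future use"]},
        "D": {0: ["OAM_OAM", "OAM_NE"], 1: ["OAM_Backup"]},
        "E": {0: ["OAM_OAM"], 1: ["OAM_Backup"], 2: ["Citrix ICA Clients"],
              3: ["Reserved for future use"]},
    }

    # Example: which communication groups share interface 0 in configuration B?
    print(INTERFACE_CONFIGS["B"][0])   # ['OAM_OAM', 'OAM_Backup']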
Hardware model   Supported Interface Configuration   Recommendations
SE T5220         A, B, C, D                          For Test Bed and Small networks,
NETRA T5440      A, C, D                             configuration A can be sufficient.
M5000            A, C, D                             For Medium networks, it is recommended
                                                     to use at least configuration D.
                                                     For Large and X-Large networks, it is
                                                     highly recommended to carefully
                                                     distinguish the flows by using
                                                     configuration C.

Table 40 : Supported Interface Configurations per server type (Nominal)
OAM Application             Server Type                             Supported Interface Configuration
WMS Primary / Secondary     SF V8x0, SF V250, SF4900 with quad      A, C, D
Main Server                 gigabit "copper" Ethernet cards
WMS Primary / Secondary     SF 4x00 with optical Ethernet cards,    A, B
Main Server                 N240
Server of Clients           SF V8x0                                 A, E

Table 41 : Supported Interface Configurations per server type (legacy)
Engineering Note: Interface Configuration for Security
For security best practices, it is recommended to use configuration C in order to apply security rules
and policies (e.g. firewall rules) against each sub-network according to the nature of the traffic.

Engineering Note: Interface Configuration for Server of Clients solution
A Server of Clients solution can be deployed (an option delivered through a service offer) relying on
configuration E above.

Engineering Rule: Interface Configuration and IPMP
IPMP is mandatory on all WMS servers.
Hardware Platform                               Network Interface Number
                                                0      1      2      3      4      5      6      7
SF4800 with X1032A SCSI cards, N240             ce0    ce1    ce2    ce3    -      -      -      -
SF4800 with X2222A or X4422A SCSI cards         ce2    ce3    ce6    ce7
SF4900 (with optical Ethernet cards)            ce2    ce3    ce4    ce7    ce8    ce9
SF4900 (with Quad Gb "copper" Ethernet cards)   ce2    ce3    ce4    ce5    ce8    ce9    ce10   ce11
SF V880 with Quad Fast Ethernet cards           qfe0   qfe1   qfe2   qfe3   qfe4   qfe5   qfe6   qfe7
SF V8x0 with Quad Gb Ethernet cards             ce0    ce1    ce2    ce3    ce4    ce5    ce6    ce7
SF V250 with Quad Gb Ethernet cards             ce0    ce1    ce2    ce3    ce4    ce5    ce6    ce7
SF V250 with Quad Fast Ethernet cards           qfe0   qfe1   qfe2   qfe3   qfe4   qfe5   qfe6   qfe7

Table 42 : Interface Naming Convention
7.4.2 NETWORK INTERFACE SETTING
By default, the network interface settings on the WMS Servers are configured to use auto-negotiation.
However, there might be some auto-negotiation interoperability issues between Solaris servers and
some switches or routers, causing connectivity problems.
In the event of networking issues between WMS and the switch/router, or if networking issues are
suspected, auto-negotiation should be disabled, the duplex setting set to full duplex and the speed set
to the highest speed supported by both the server and the switch/router.

Engineering Rule: Interface Setting between WMS and Switch/Router
In the event of networking issues between WMS and the switch/router, or if networking issues are
suspected, auto-negotiation should be disabled, the duplex setting set to full duplex and the speed set
to the highest speed supported by both the server and the switch/router.
It is mandatory that both the switch and the server be identically configured.
Possible problems include:
- Problems with basic connectivity - no link, or the switch port disables itself due to excessive errors
- Poor network performance (periodically slow response, periodic time-outs)
- In some instances auto-negotiation is incompatible and a connection can never be achieved
Please see the WMS release notes and/or contact WMS Product Support for additional details,
including the steps to manually configure duplex settings.
7.4.3 SERVER NETWORK RESILIENCY OVERVIEW (IPMP)
This section provides a summary of the WMS implementation of Solaris IPMP.
Solaris IPMP provides an automated and transparent mechanism to increase the resiliency of an OAM
server's network adapters. IPMP provides:
- Failure Detection - the ability to detect when a network adapter has failed and fail over to the
  alternative network adapter
- Repair Detection - the ability to detect when a network adapter that has previously failed has been
  repaired and fall back
The implementation of Solaris IPMP covers the following four types of communication failures:
- The transmit or receive path of the interface has failed.
- The attachment of the interface to the IP link is down.
- The port on the switch does not transmit or receive packets.
- The physical interface in an IPMP group is not present at system reboot.
The WMS servers provide physical redundancy on the network interface cards by adding a redundant
card. In the WMS context, each group of interfaces has active interface(s) (on the primary network
interface card) and corresponding failover standby interface(s) (on the second network interface card).
IPMP on the WMS Servers is configured in Active - Standby mode. In this mode, only one network
interface in a group is activated, while the other standby interface has no traffic going over it besides
the traffic required to test the health status of the interface.
IPMP in OAM06 is performed using a new Solaris 10 feature, link-based failure detection, rather than
the probe-based failure detection of previous releases. In link-based failure detection, the IPMP
daemon (in.mpathd) watches for standard "Link Up"/"Link Down" reports from the network adapter. If the
adapter reports "Link Down", it moves the IP configuration for that interface to another adapter in the
same group that has a link.
The advantage of using link-based IPMP is that it does not require Test IP addresses. It only requires a
single Data IP address that is used for the standard server communication and migrates automatically
between interfaces in the event of an interface failure. The Standby Interface does not have a data IP
address assigned. Should the Active Interface fail, the data address will migrate to the Standby
Interface.
The rules which apply to OAM server interface IPMP groups are:
- All interfaces in an IPMP group must have unique MAC addresses.
- All interfaces in an IPMP group must be of the same media type.
- IP addresses do not need to be assigned to unused or optional interfaces (and these do not need
  to be cabled to a switch or router). However, if possible, it would be ideal if these could be planned
  ahead of time and IPs reserved, to avoid having to re-work the OAM IP plan.
7.4.4 OTHER WMS IP ADDRESS REQUIREMENTS
This section defines the IP requirements and engineering rules for the other interfaces of the WMS
Server.
7.4.4.1 DISK ARRAYS (T3)
The disk arrays for the SF 4800 require one IP address each. This interface is only used for
configuration and administration of the disk array and is not used for the disk write data between the
OAM server and the disk array (which is actually done through a dedicated dual fibre channel optical
connection).
The Disk Array IP addresses should be on the same subnet as the server network (OAM-OAM).
7.4.4.2 DISK ARRAYS (STOREDGE 6120)
The two SE 6120 disk arrays used with the SF 4900 have 1 Ethernet connection each; however, they
use multipathing and therefore only require one IP address for both. These interfaces are only used
for configuration and administration of the disk array and are not used for the disk write data between
the OAM server and the disk array (which is actually done through a dedicated dual fibre channel
optical connection).
The Disk Array IP address should be on the same subnet as the server network (OAM_OAM).
7.4.4.3 DISK ARRAYS (STOREDGE 6140)
The ST6140 contains two Ethernet boards (Dual Ethernet 10/100BaseT cards) hosted in the controller
tray. In the context of WMS, one pair of Ethernet ports is used and connected to the OAM Ethernet
LAN.
These interfaces are only used for configuration, administration and supervision of the disk array and
are not used for the disk write data between the OAM server and the disk array (which is actually
done through a dedicated dual fibre channel optical connection).
They do not support IPMP multipathing. Two physical IP addresses are provisioned (one for each
card) and managed internally by the CAM (Common Area Manager). CAM is the unique entry point to
perform the administration and the configuration of the ST6140 (alarms are also sent to Sun MC
through CAM). Since the data IP@ are known by CAM (declared at I&C time), the usage of the
effective IP@ is managed by CAM.
The Disk Array IP addresses should be on the same subnet as the server network (OAM_OAM).
7.4.4.4 ST 2540 DISK ARRAYS
The ST2540 is used with the M5000. It contains two Ethernet boards (Dual Ethernet 10/100BaseT
cards) hosted in the controller tray. In the context of WMS, one pair of Ethernet ports is used and
connected to the OAM Ethernet LAN.
These interfaces are only used for configuration, administration and supervision of the disk array and
are not used for the disk write data between the OAM server and the disk array (which is actually
done through a dedicated dual fibre channel optical connection).
They do not support IPMP multipathing. Two physical IP addresses are provisioned (one for each
card) and managed internally by the CAM (Common Area Manager). CAM is the unique entry point to
perform the administration and the configuration of the disk (alarms are also sent to Sun MC through
CAM). Since the data IP@ are known by CAM (declared at I&C time), the usage of the effective IP@
is managed by CAM.
The Disk Array IP addresses should be on the same subnet as the server network (OAM_OAM).

Engineering Rule: IP addresses for ST2540 Disk Array
The ST2540 Disk Array IP addresses should be on the same subnet as the server network
(OAM-OAM).
7.4.4.5 SYSTEM CONTROLLERS (SF4X00)
The SF 4x00 servers use System Controller cards to reboot and configure the system. These cards
are used in a 1+1 redundancy configuration. With two System Controller cards in each system, if one
System Controller card fails, the other System Controller card can take control of the system without
causing a disruption in the system operation.
Each of these requires 1 IP address. 1 supplemental IP address is required for the active SC card and
is also referred to as the SC logical IP address. This logical IP will automatically switch over to the
standby card upon failure of the active card. This concept is not to be confused with IPMP. Both
system controller cards have a valid IP address by which they can be reached. The active system
controller card can also be reached by the logical IP address. Therefore a total of 3 IP addresses will
be required for the system controllers on the SF4x00.
The system controller IP addresses should be on the same subnet as the server network (OAM-OAM).
The following picture summarizes the Ethernet connectivity for an integrated E4900 with system
controller cards and ST6140 disk array:
Figure 17 : Example of E4900 with System controller and ST6140 connectivity

7.4.4.6 SYSTEM CONTROLLER (SF V250 / N240)
The SF V250 and Netra 240 servers also have an SC card. The same description as for the SF4x00
applies, except that the SF V250 and Netra 240 only have 1 SC card; therefore the concept of the
logical IP does not apply. Only 1 IP address is required.
7.4.4.7 XSCF SYSTEM CONTROLLER (M5000)
The M5000 uses one System Controller card to reboot and configure the system. XSCF (eXtended
System Control Facility) is the system management firmware that is preinstalled on the system
controller (SC) of the M5000 server. XSCF provides a browser-based web interface, a command-line
interface and an SNMP user interface, and also includes a SunMC Agent Platform.
The XSCF controller card also contains a Sun MC agent to communicate with the Sun MC server
(running on the M5000 Primary Main Server) through dedicated ports 10.
One IP address is required for the XSCF-LAN interface (a 100-BASE-T Ethernet connection with one
cable is required) and no redundancy is available in the XSCF controller card. (The serial link on the
controller card can still be reached through a console server.)
The system controller IP address should be on the same subnet as the server network (OAM-OAM).

Engineering Rule: IP address for XSCF System Controller
The system controller IP address should be on the same subnet as the server network (OAM-OAM).
10: Please refer to "UMT/OAM/APP/024293 Alcatel-Lucent 9353 WMS - Ports and Services" for the
list of ports used within the WMS ROC Perimeter.
Figure 18 : M5000 with System controller and ST2540 connectivity

7.4.4.8 ILOM SYSTEM CONTROLLER (NETRA T5440)
In addition to its Quad Ethernet Interface Cards, the NETRA T5440 server uses its ILOM card to
manage and configure the system. Integrated Lights Out Manager (ILOM) is system management
firmware that provides a browser-based web interface and a command-line interface, as well as a
Syslog interface, an SNMP user interface and an IPMI user interface.
The NETRA T5440 SP-ILOM card has a 100-BASE-T Ethernet connection; 1 cable is required.
It requires one IP address, and no redundancy is available in the ILOM controller card. (The serial link
on the controller card can still be reached through a console server.)
Figure 19 : Example of NETRA T5440 with System controller connectivity

Engineering Rule: IP address for ILOM System Controller
The system controller IP address should be on the same subnet as the server network (OAM-OAM).

7.4.4.9 ILOM SYSTEM CONTROLLER (T5220)
The same ILOM system controller card is available for the T5220 hardware (see figure above).
7.5. WMS SERVER IP ADDRESS REQUIREMENTS
As a summary of the above sections, the following table provides the IP address requirements for the
different supported WMS hardware:

Hardware Platform   Network interface Groups   SC Cards   ST2540   Min IP Address   Max IP Address
SE T5220                                       1          N.A.
SUN NETRA T5440                                1          N.A.
SE M5000 - 4 CPU                               1          2
SE M5000 - 8 CPU                               1          2

Table 43 : WMS IP Requirements Summary
7.6. NETWORK INTERFACES ON NPO / MS PORTAL SERVER
The NPO (or MS Portal) server communicates with the Primary and Secondary Main Servers and the
NPO (or MS Portal) Clients. It can optionally communicate with centralized backup systems such as
Legato/VERITAS.
7.6.1 INTERFACE CONFIGURATION ON NPO OR MS-PORTAL SERVER

7.6.1.1 FOR T5220 AND T5440 SERVERS
The network segmentation for a standalone NPO or MS PORTAL on a T5220 or T5440 server can be
configured as:
- An Ethernet port for the OAM-OAM network
- An Ethernet port (optional) for the OAM-BR network
Network Interface Number   Subnet Group   Communication to:
0                          OAM-OAM        Clients and other OAM Servers Interface
1                          OAM-BR         Centralized Backup & Restore

Table 44 : Interface configuration on NPO or MS-Portal
NPO and MS-NPO support the link-based IPMP functionality.

Engineering Rule: IP addresses and Subnets
The IP addresses used for each group must be in different subnets.

Engineering Note: System Controller
A separate IP address should also be considered for the system controller card (ILOM) specific to the
Sun Enterprise T5220 and T5440. Please refer to the previous section for more information.
As a summary of the above sections, the following table provides the IP address requirements for the
different supported NPO or MS PORTAL hardware:

Hardware Platform             Network interface Groups    SC Cards   ST2540   Min IP Address   Max IP Address
                              0    1    2     3
SE T5220                      1    1    N.A.  N.A.        1          N.A.
SUN NETRA T5440 (applicable to MS SUP only)

Table 45 : T5220/T5440 NPO / MS PORTAL IP Requirements Summary
7.6.1.2 FOR M4000-2CPU SERVERS
The network segmentation for a standalone NPO or MS PORTAL on an M4000-2CPU server can be
configured as:
- An Ethernet port for the OAM-OAM network
- An Ethernet port (optional) for the OAM-BR network
Network Interface Number   Subnet Group   Communication to:
0                          OAM-OAM        Clients and other OAM Servers Interface
1                          OAM-BR         Centralized Backup & Restore

Table 46 : Interface configuration on NPO or MS-Portal
NPO and MS-NPO support the link-based IPMP functionality.

Engineering Rule: IP addresses and Subnets
The IP addresses used for each group must be in different subnets.

Engineering Rule: M4000 Disk Array ST2540 IP Address
The two IP@ of the ST2540 for the M4000 are set automatically with default values (192.168.128.101
& 192.168.128.102), and the ST2540 has direct Ethernet connectivity with the M4000. Therefore there
are no specific IP@ addressing requirements for the external storage within the OAM LAN.

Figure 20 : M4000 with System controller and ST2540 connectivity

Engineering Note: System Controller
A separate IP address should also be considered for the system controller card (XSCF) specific to
the Sun Enterprise M4000. Please refer to the previous section for more information.
As a summary of the above sections, the following table provides the IP address requirements for
different supported NPO or MS PORTAL hardware:
Hardware Platform    Network interface Groups    SC Cards   ST2540   Min IP Address   Max IP Address
                     0    1    2     3
SE M4000 - 2 CPU     1    1    N.A.  N.A.        1          2 *

(applicable to NPO or MS NPO only)
*: the two IP@ of the ST2540 for the M4000 are set automatically with default values
(192.168.128.101 & 192.168.128.102), and the ST2540 has direct Ethernet connectivity with the
M4000.

Table 47 : M4000-2CPU NPO / MS PORTAL IP Requirements Summary
7.6.1.3 FOR M4000-4CPU SERVERS
For the M4000-4CPU, the NPO server needs to be installed in cluster mode as a Master node.
The network segmentation for a cluster mode NPO or MS PORTAL on an M4000-4CPU server can be
configured as:
- An Ethernet port for the OAM-OAM network
- An Ethernet port (optional) for the OAM-BR network
Network Interface Number   Subnet Group   Communication to:
0                          OAM-OAM        Clients and other OAM Servers Interface
1                          OAM-BR         Centralized Backup & Restore

Table 48 : Interface configuration on NPO or MS-Portal
In an NPO Cluster, the following IP addresses are used to interact with the data source (WMS
Servers), the NPO Clients and between the NPO servers:
- Floating IP address - This is the NPO cluster IP address as seen from the NPO clients; it is the
  "external" IP address of the master node and the only IP known to the NPO clients.
- Public IP address - This is the default IP address of each NPO server and the IP address used to
  interact with the WMS Servers. Note that all NPO servers in a cluster communicate with the WMS
  servers. The Public IP address is the address assigned during the installation of Solaris on the
  NPO server.
- Private and Virtual IP addresses - These are IP addresses reserved for the Oracle RAC clustering
  activities and the communication between the NPO servers in a cluster.
The table below explains the subnet to which each of the IP addresses in an NPO Cluster belongs.

Network Interface Number   Subnet Group   Contains IP Address     Communication to:
0                          OAM-OAM        Public IP Address       WMS Servers
                                          Floating IP Address     NPO Clients
                                          Virtual IP Address      Other NPO Servers
1                          OAM-BR         B&R IP Address          Centralized Backup & Restore
3                          Private        Private IP Address      Other NPO Servers

Table 49 : Subnet and IP Addressing configuration on NPO or MS-Portal
NPO and MS-NPO support the link-based IPMP functionality.
Engineering Rule: IP addresses and Subnets
- The IP addresses used for each group must be in different subnets.
- The Private IP address should be on a private subnet different from the Public/Floating/Virtual IP
  addresses; it can be assigned an internal IP address, leaving it at its install-time default of
  192.168.0.11.

Engineering Rule: M4000 Disk Array ST2540 IP Address
The two IP@ of the ST2540 for the M4000 are set automatically with default values (192.168.128.101
& 192.168.128.102), and the ST2540 has direct Ethernet connectivity with the M4000. Therefore there
are no specific IP@ addressing requirements for the external storage within the OAM LAN.

Engineering Note: System Controller
A separate IP address should also be considered for the system controller card (XSCF) specific to
the Sun Enterprise M4000. Please refer to the previous section for more information.
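For illustration, a hypothetical NPO cluster (master + slave) IP plan following these rules could be
captured as below. Every address is an example value chosen for this sketch, not a product default,
except the private address, whose 192.168.0.11 install-time default is given in the rule above.

    # Hypothetical example of an NPO cluster IP plan (illustration only).
    npo_cluster_ip_plan = {
        "OAM-OAM": {                          # interface 0, one subnet
            "public_master":  "10.20.1.11",   # talks to the WMS servers
            "public_slave":   "10.20.1.12",
            "floating":       "10.20.1.10",   # the only IP known to NPO clients
            "virtual_master": "10.20.1.21",   # Oracle RAC virtual IPs
            "virtual_slave":  "10.20.1.22",
        },
        "OAM-BR": {                           # interface 1, separate subnet
            "backup_restore": "10.30.1.11",
        },
        "Private": {                          # nxge3, private RAC interconnect
            "private_master": "192.168.0.11", # install-time default per the rule above
            "private_slave":  "192.168.0.12",
        },
    }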
Figure 21 : M4000 with System controller and ST2540 connectivity

Figure 22 : Magnified View of M4000-4 CPU Interface connectivity in Cluster Mode
(Figure notes: the public, floating and virtual IP addresses are defined over nxge0, on the OAM_OAM
subnet with redundancy through IPMP; the OAM_BR subnet uses a separate port; the private IP@
must be declared over nxge3, which is to be connected to the slave node, and a loop has to be made
between nxge2 and nxge3 if no slave is yet available; the external ST2540 storage connects with two
internal IP@ for controllers A and B, used for administration purposes only, the rest passing over the
FC cables as usual.)
As a summary of the above sections, the following table provides the IP address requirements for the
different supported NPO or MS PORTAL hardware:
Hardware Platform    Network interface Groups     SC Cards   ST2540   Min IP Address   Max IP Address
                     0       1    2     3
SE M4000 - 4 CPU     4 **    1    N.A.  N.A.      1          2 *

(applicable to NPO or MS NPO only)
*: the two IP@ of the ST2540 for the M4000 are set automatically with default values
(192.168.128.101 & 192.168.128.102), and the ST2540 has direct Ethernet connectivity with the
M4000.
**: of these, the Private IP address is an internal IP address and can be left at a default such as
192.168.0.11 at time of installation.

Table 50 : M4000-4CPU NPO / MS PORTAL IP Requirements Summary
Finally, the <server_hostname> (e.g. paris) configured during the NPO installation is the cluster
hostname assigned to the NPO cluster and should be associated with the Floating IP Address.
The Master Server hostname is the <server_hostname> suffixed with 1 (e.g. paris1), where 1 is the
Cluster Node Number for a Master server in NPO Cluster mode.
7.6.2 NPO CLUSTER (SFV490-4CPU)

Engineering Note: Machines applicable to the NPO cluster configuration
The NPO cluster configuration is applicable to legacy SFV490 4 CPU machines only.

In an NPO Cluster, the following IP addresses are used to interact with the data source (WMS
Servers), the NPO Clients and between the NPO servers:
- Floating IP address - This is the NPO cluster IP address as seen from the NPO clients; it is the
  "external" IP address of the master node and the only IP known to the NPO clients.
- Public IP address - This is the default IP address of each NPO server and the IP address used to
  interact with the WMS Servers. Note that all NPO servers in a cluster communicate with the WMS
  servers.
- Private and Virtual IP addresses - These are IP addresses reserved for the Oracle RAC clustering
  activities and the communication between the NPO servers in a cluster.

In addition, there is an IP address for centralized backup and restore connectivity.
The figure and table below explain the subnet to which each of the IP addresses in an NPO Cluster
belongs.

Figure 23 : Subnet Groups in a NPO Cluster
(The figure shows the NPO Master and Slave Servers sharing the NPO Cluster Group and OAM-BR
Group, the OAM-OAM Group connecting them to the WMS Primary and Secondary Main Servers,
and the NPO Client reaching the cluster over the OAM-OAM Group.)
Network Interface Number   Subnet Group   Contains IP Address     Communication to:
0                          OAM-OAM        Public IP Address       WMS Servers
                                          Floating IP Address     NPO Clients
                                          Virtual IP Address      Other NPO Servers
1                          NPO Cluster    Private IP Address      Other NPO Servers
2                          OAM-BR         B&R IP Address          Centralized Backup & Restore
An additional IP address is required for access and management of each Brocade Silkworm 300 FC
switch; this IP address can reside on the OAM-OAM subnet group.
NPO servers in a cluster support the link-based IPMP functionality.

Engineering Note: Firewall Implementation
There should be no firewall implemented on the NPO Cluster Group, as it may affect the
communication of the NPO servers in the cluster.
7.6.3 FIBRE CHANNEL CABLING CONNECTIVITY
In an NPO cluster, the master and slave nodes, along with their associated disk arrays, are physically
connected using a Brocade Silkworm 300 Fibre Channel Switch.
NPO servers in a cluster are only supported on SF490-4CPU servers with SE3510 disk arrays; they
are not supported for any other NPO server hardware configuration.
All NPO nodes (Master and Slave) communicate with all the disk arrays in a cluster.
The figure below describes the cabling between the SF490-4CPU servers, SE3510 disk arrays and
Fibre Channel Switch. For more information on the connectivity and configuration of the NPO servers
and disk arrays to the Brocade switch, please refer to the document "Cabling Description for SUN
Machines, 3BK 17430 4005 RJZZA".
Figure 24 : NPO Cluster Fibre Channel Switch Connectivity
(The figure shows the Master and Slave SF490 servers and their SE3510 disk arrays cabled to ports
of the Brocade Silkworm 300 FC switch, with the OAM-OAM, NPO Cluster and OAM-BR groups
connected to each server.)

In a cluster scenario, redundancy can be added to the switch by using two fibre channel
switches so that in case one switch goes down, data can be transferred over the second
redundant switch. Figure below illustrates this scenario.

Figure 25 : NPO Cluster Fibre Channel Switch Redundancy
(The figure contrasts a non-redundant configuration, where the NPO servers and SE3510 arrays
share a single FC switch, with a redundant configuration using two FC switches.)
7.7. NETWORK INTERFACES ON WQA SERVER
The WQA server requires 1 IP address on the OAM_OAM communication group. For more information
on WQA, please refer to the W-CDMA Quality Analyser (WQA) section.
7.8. NETWORK INTERFACES ON CLIENTS
OAM Clients are used to run different OAM applications, including the WMS GUI, WQA client, NPO
Client, SDA tool, RFO tool and WPS tool. One or multiple OAM Clients can be used to access the
different OAM applications based on Windows or Unix. These clients require 1 IP address each on the
OAM_OAM communication group. For more information on clients, please refer to the WMS Clients
and Server of Clients Engineering Considerations section.
7.9. NETWORK INTERFACES FOR REMOTE ACCESS
Remote access is provided in three scenarios:
- Server Console Access - This is provided by using a Terminal Server such as the LX8020S series
  from MRV.
- VPN Remote Access for Alcatel-Lucent Support - This is provided by the RAMSES VPN solution.
- VPN Remote Access for Customer users - This is provided by Alcatel-Lucent's Brick VPN
  solution.

Terminal Server Usage
Terminal Servers provide serial-based connectivity for a wide range of devices. From a workstation
with a telnet client, an operator can access the serial ports of the Sun servers installed as part of the
OAM network.
The Terminal Server IP address is required on the OAM_OAM communication group.
Figure 26 : Terminal Server Connections
(The figure shows the Terminal Server connected by serial links to the screen-less OAM servers and
other devices with serial ports, and by Ethernet to the LAN for Telnet access.)
VPN Remote Access
For VPN remote access connectivity, please refer to the Security and Remote Access section.
7.10. OTHER NETWORKING CONSIDERATIONS
The following paragraphs highlight considerations on OAM Network Parameter Changes, Static
Routes and Network Address Translation, as well as protocol information for the Network Elements.
7.10.1 SOUTHBOUND INTERFACE PROTOCOLS BETWEEN WMS AND NE
The following table provides the protocols used between the WMS server and the managed Network
Elements.
Network Element Type                   Protocols Used
RNC                                    SEPE (Proprietary), FMIP (Proprietary), FTP, Telnet,
                                       RADIUS, IKE (IPSec), NTP, ICMP
NODE B                                 SEPE (Proprietary), FTP, Telnet, RADIUS,
Models: 931x, 932x, 933x and 934x      HTTP (TIL), SNMP
NODE B                                 FTP, SFTP (SSH), Telnet, HTTP (LMT),
Model: 939x                            CORBA, SYSLOG
ESE - RSP 7670                         FTP, Telnet, SNMP

Table 51 : Protocols used on southbound Interfaces
Engineering Note: Firewall
Firewall rules between the OAM and the Network Elements (OAM_NE interface) should be applied
according to the Network Element Type.
Please see "UMT/OAM/APP/024293 Alcatel-Lucent 9353 WMS - Ports and Services" for the list of
ports used within the WMS ROC Perimeter.

Engineering Recommendation: Security Requirements for customers NOT having 939x 11 NODE B
models
For networks without 939x NODE B models, it is recommended to block the SYSLOG port.
7.10.2 OAM NETWORK PARAMETER CHANGES
Changing the network parameters (IP Address and Hostname) of any OAM server can have major
impacts on the overall network. The following needs to be considered and carefully planned prior to
engaging in the network parameter change process:
- All NEs communicating with the WMS servers in the ROC need to also point to the new IP
  address.
- Static routes on other servers / NEs need to be updated as required.
- The IP address of the new default router must be known.
- All firewalls and packet filters between the servers and NEs, or between servers and clients, need
  to be configured with the new IP addresses of the WMS Servers.
- The VPN remote access solution must also be verified and re-configured as necessary to ensure
  that the servers with the new IP addresses will be reachable after the networking changes have
  been applied. In particular, the firewall rules in the Secure Router used for remote access need to
  be updated with the new IP addresses of the WMS servers.
It is recommended that the WMS CIQ be filled out with all the new networking information prior to the
network parameter change procedure, as if it were a new installation.
7.10.3 STATIC ROUTES
Static routes must be identified and configured for each network/subnet that must be reachable by the
WMS servers through a route other than the default route. This information must be provided when
completing the CIQ.
With multi-homed servers, it is more likely that static routes will need to be configured to ensure that
packets transit through the correct interface.
For example, consider a multi-homed WMS Main Server with one network interface on the OAM
network and another interface on the NE network.
It is important that all NE traffic goes through the network interface on the NE network, as there could
be scenarios where traffic from a network element arrives on the interface on the NE network but
returns to the NE through the interface on the OAM network where the default router is located (if
there is a path from the OAM network to the NE network). Scenarios like this can cause many subtle
problems, such as firewalls only seeing one direction of a connection.
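To make the pitfall concrete, the following minimal Python sketch (all addresses and interface names
are hypothetical examples) applies longest-prefix matching, as a routing table would, to show how a
static route keeps NE traffic on the NE-facing interface instead of the default route:

    # Sketch: longest-prefix match over a hypothetical WMS routing table.
    import ipaddress

    routes = [
        (ipaddress.ip_network("10.50.0.0/16"), "ce2 (OAM_NE)"),   # static route to the NE network
        (ipaddress.ip_network("0.0.0.0/0"),    "ce0 (OAM_OAM)"),  # default route
    ]

    def egress(dst):
        dst = ipaddress.ip_address(dst)
        matches = [(net, ifc) for net, ifc in routes if dst in net]
        return max(matches, key=lambda m: m[0].prefixlen)[1]

    print(egress("10.50.3.7"))   # ce2 (OAM_NE)  -> symmetric path back to the NE
    print(egress("172.16.9.1"))  # ce0 (OAM_OAM) -> everything else uses the default

Without the 10.50.0.0/16 entry, replies to the NE would leave through the default route on the OAM
network, producing exactly the asymmetric flows described above.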
11: The list of 939x Node B models is: 9391, 9392, 9393 and 9396.
7.10.4 NETWORK ADDRESS TRANSLATION
Network Address Translation (NAT) is currently not supported in WMS. Some client-server
applications in WMS are known not to work properly if there is a NAT device between them.
The following solutions should be considered (for the WMS Client <-> WMS Server link) if/when
deploying WMS in a network environment using NAT:
- Usage of a Server of Clients (Citrix).
- A VPN solution using the Alcatel-Lucent VPN Router.
8. BANDWIDTH REQUIREMENTS
This chapter describes the bandwidth requirements for WMS. The bandwidth requirements between
the WMS Clients and the ROC, between the ROC and the Network Elements, and the OSS bandwidth
requirements are described in this chapter.
8.1. BANDWIDTH CONSIDERATIONS WITHIN THE ROC
Within the network composed of WMS servers, the Ethernet LAN which provides the first-level
connectivity to the OAM servers (i.e. WMS Primary / Secondary Main Server, WQA, NPO, etc.) must
operate at 1000 Mbps (1 Gbps).

Engineering Recommendation: Routing Switch and bandwidth considerations
Every Ethernet port of every server must be connected to an Ethernet switch through a 1000 Mbps
connection. The 1000 Mbps LAN should be extended to the Routing Switch, which provides the
aggregation of the ATM/WAN interfaces to remote OAM networks, as well as routing.
The provisioning of bandwidth for these ATM circuits should be based on the bandwidth specifications
provided in the bandwidth requirements sections of this document. For a given link, an assessment
first needs to be made to determine the major communication flows that need to go through it on a
point-to-point basis. For each of these point-to-point pairs, the required bandwidth should be looked up
and added; the total provides the required bandwidth for that link (see the sketch below).
In addition, extra consideration is required for the software download link, given the high number of
NODE Bs and the parallel download feature supporting several simultaneous downloads.
8.1.1 BANDWIDTH CONSIDERATIONS FOR THE MULTIPLE INTERFACE CONFIGURATIONS
From a bandwidth perspective, using multiple networks is recommended in the following cases:
- Use (or plans to use in the future) of the centralized backup and restore features which perform
  the backup over the network (multiple networks are mandatory in this case, with a separate
  network for BU&R), as a backup can involve transferring hundreds of gigabytes of data over the
  network.
- In a ROC managing many NEs and with a high number of users, it is recommended to separate
  client traffic from NE-facing traffic for improved GUI response.
8.2. BANDWIDTH REQUIREMENTS BETWEEN THE ROC AND THE NES
This section gives the OAM bandwidth requirements between a ROC and the different NE types.
In general, performance data normally dominates the bandwidth requirements from the NEs to the
OAM servers. Fault management requires much less than performance management but is more
bursty. Software downloads from the WMS Main Server to the NEs which support software download
are bandwidth intensive. A trade-off needs to be made between the bandwidth requirements for
software download and the response time of a software download, given that it is not performed on a
regular basis.
With the exception of software download or similar bulk provisioning features, most of the
management traffic originates at the NEs and terminates on the OAM servers, and therefore the
bandwidth figures given in this section are for flows in this direction. However, it is recommended that
the provisioning of the bandwidth of the links to the NEs be symmetrical in order to support software
downloads.
Unless noted otherwise, the bandwidth requirements listed in this section are for all FCAPS functions,
combined for the ROC.
All bandwidth requirements in this section represent the effective throughput required.
8.2.1 LATENCY
For the links to the NEs, it is recommended to keep latency below 500 msec; for the links to the
clients, we recommend keeping latency below 100 msec in order not to impact WMS client
performance.
Firewalls can be responsible for the biggest introduction of latency in some networks. In order to
ensure that firewalls do not degrade latency, it is important to ensure that they are adequately
dimensioned to meet the traffic loads.
The bandwidth requirements listed in this section represent the throughput required for TCP
communications. Data transfer tests using FTP can be used to validate that a given link meets the
required specifications.
8.2.2 UTRAN ACCESS BANDWIDTH REQUIREMENTS
The bandwidth requirements for the management of UMTS Access NEs (RNC CN, IN, POC and
NODE B) are dominated by the requirements for performance management (RNC CN observation
counters and call trace) and software downloads.
The bandwidth requirements for fault management and configuration management of all UMTS
Access NEs, and even those for the performance management (observation downloads) of the RNC
IN, POC and NODE B, are negligible compared to the requirement for the download of RNC CN
observation counters and call trace.
The requirement for call trace and the requirement for downloading RNC CN observation counters
must be added. Since the traffic generated by software download operations flows in the opposite
direction (i.e. ROC-->NE instead of NE-->ROC), the requirement for SW download does not have to
be added to the call trace and RNC CN observation counter download requirements.
The sum of the call trace requirement and the RNC CN observation counter download requirement
applies in one direction (NE-->ROC); given a symmetrical full duplex link, the same amount of
bandwidth will be available in the reverse direction (ROC-->NE) for software download operations.
The following sections give the BW requirements for call trace, RNC CN observation counter
downloads and software download operations.
8.2.2.1 CALL TRACE
The bandwidth requirements for call trace can vary greatly based on the usage of this functionality.
The configuration of the NEs (i.e. number of TMUs), the load on the UMTS Access network (number
of active subscribers, cells, etc.) and the type of template used when invoking call trace can have a
major impact on the quantity of data generated by the UA NEs from the call trace feature. The
numbers given below are worst-case examples of the amount of data generated by call trace. Should
the amount of data generated from CT exceed that which can be transferred over the link from the
NEs to the WMS Main Server, the data will accumulate on the NEs. Given that there is a limit to how
much CT data can be accumulated on the NEs, as of UA4.2/OAM5.0 there is a protection mechanism
on the NEs and the OAM server to prevent overflow (CT data could, however, be discarded).
Type of Call trace   Maximum Bandwidth Requirement (kbps)
1 CTg or OTCell      1000
1 CTa or CTb         100

Table 52 : Bandwidth Requirements for RNC Call Trace (maximum value)
8.2.3 RNC CN OBSERVATION COUNTERS
The amount of time that can be allocated / allowed for the RNC CN observations to be downloaded
will impact the amount of bandwidth required to support the download. There is a trade-off to be made
between the cost of the required bandwidth and the latency for downloading the observation files. This
file transfer delay will directly result in extra delay for the conversion into XML files and can possibly
add delay to the importation into NPO (or into an OSS). Recommended download times for the RNC
observation files can range from 30 seconds to 5 minutes (it is not recommended to base the network
engineering on download times which exceed 5 minutes).
The formula below gives the bandwidth requirements for the observation download for one RNC CN:

RncBandwidthRequired = 40 x (11000 + (12400 x RncNei) + (2700 x FDDCells)) / DT

Where:
FDDCells = Number of FDDCells managed by this RNC
RncNei = Number of Neighbouring RNCs to this RNC
DT = allocated time for a download of a RNC_CN in seconds.
Examples of the application of this equation with DT=150 seconds are given in the table below:
FDDCells    Neighbouring RNCs    BW Requirement (kbps)
100         3                    83
300         4                    228
300         8                    241
600         5                    443
600         10                   459
900         8                    665
900         16                   691
1200        10                   884
1200        20                   916
Table 53 : Bandwidth Requirements for RNC CN Observation counters
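
As an illustration, the formula can be evaluated directly from a shell. This is a minimal sketch
assuming the formula result is in bit/s and converting to kbps with a factor of 1024; the results
land within a few kbps of Table 53, so the exact rounding convention used for the table is not
reproduced here:

#> DT=150; FDDCELLS=300; RNCNEI=4
#> echo "40 * (11000 + 12400 * $RNCNEI + 2700 * $FDDCELLS) / $DT / 1024" | bc
226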


The current engineering guidelines for the bandwidth requirements for the performance management
of UMTS Access NEs focus on minimizing the bandwidth requirement specification. One hypothesis
made is that the RNC CN, IN and POC observation files are not downloaded at the same time (these
are downloaded once the files are ready to be transferred from the NE). It has been observed on live
networks that these files are generally not ready to be downloaded at the same time. However,
observation files downloaded from the RNC IN and POC are relatively small compare to those of the
RNC CN and IN. As an example, for an RNC which has 300 FDDCells and 5 Neighbouring RNCs, the
maximum size of the IN and POC file sizes combined is only about 15% of that of the RNC CN and
therefore if they were all to be downloaded at the same time, the time required for the download of all
UMT/OAM/APP/024291

01.09/ EN Standard

2009-2010 Alcatel-Lucent

Alcatel-Lucent

104

files would only be about 15% longer than that which was predicted by the DT variable used in the
previous equation (which was only for RNC CN observation files).
The same considerations apply to the download of the NODE B observation files. The bandwidth
requirement for the download of the NODE B observation files are already small given that they are
normally distributed over the hour and that an algorithm is used spread the downloads over multiple
RNCs. Given that the download of the RNC CN observation file only takes a few minutes out of every
15 minutes, there is unused bandwidth during the rest of the time to allow for a rapid download of
these.
Therefore, the total bandwidth per RNC, including the CN, IN and the POC, should be equal to the
bandwidth required for the support the download of the RNC CN observation counters and optionally
the bandwidth required to support the call trace functionality.
The following rule recommends a minimum limit for the amount of bandwidth that should be available
on the link from the RNC to the OAM:

Engineering Recommendation: Throughput and Bandwidth Requirements

The minimum throughput requirement per RNS is 1 Mbps. This applies to RNCs with PSFP/DCPS
and CP3 boards.
For RNCs with the new CP4 board 12, which supports more Node Bs and cells, the minimum
throughput requirement between WMS and such an RNS is 2 Mbps.
To comply with the throughput requirement, the Ethernet port of the WMS server on the OAM_NE
interface should be connected to an Ethernet switch through a 1000 Mbps connection.
The 1000 Mbps LAN should be extended to the Routing Switch which provides aggregation of the
ATM/WAN interfaces to remote Network Elements.

8.2.4 SOFTWARE DOWNLOAD

Typical software sizes for the RNC and NODE B are provided in the Software Download section of
Chapter 3. The table below provides the maximum number of simultaneous downloads possible for
each WMS server type. Downloads exceeding these numbers are queued and performed
sequentially.

Main Server Type       Node-B    RNC
SF E4900 - 12 CPU      48        6
SF4800 - 12 CPU        24        3
SF E4900 - 8 CPU       32        4
SF4800 - 8 CPU         16        2
SF V890                16        2
SF V880                8         1
SF V250/N240           4         1

Table 54 : Maximum number of simultaneous software downloads


Depending on the number of simultaneous downloads per server type, software size per NE and the
bandwidth allocated per NE type, the time taken to download software to all the NEs can be
determined.
12

: Please refer to the RNC Product Engineering guide UMT/IRC/APP/13736 to get RNC
configuration information.
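
As a minimal sketch of such an estimate (the software size, per-NE bandwidth and fleet size below
are hypothetical; actual software sizes are given in Chapter 3), the download time for 160 NODE Bs
on an SF V890 (16 simultaneous downloads) can be approximated from a shell:

#> SIZE_MB=100; BW_MBPS=1; NODEBS=160; PARALLEL=16
#> echo "($NODEBS + $PARALLEL - 1) / $PARALLEL" | bc
10
#> echo "$SIZE_MB * 8 / $BW_MBPS * (($NODEBS + $PARALLEL - 1) / $PARALLEL) / 60" | bc
133

The first command gives the number of download batches; the second gives the total time in minutes
(10 batches of 800 seconds each, about 2 hours and 13 minutes).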


Note that although it is possible to simultaneously download software to multiple NODE Bs and RNCs,
it is recommended, in order to minimize the time required for software download operations across the
entire network, to spread the NODE B download operations across different RNCs. Downloading
software to multiple NODE Bs associated with the same RNC, simultaneously with a software
download to that RNC, represents the worst case.

8.2.5 SYSLOG MESSAGE (APPLICABLE TO 939X MODELS)

This section applies to networks with 939x 13 NODE B models managed by a WMS Large server
E4900 with 8 or 12 CPUs.
The 939x NODE B uses the WMS servers as a disk storage array for some logs generated by the
939x NODE B applications. The SysLog messages are pushed to WMS over UDP and are stored in
circular files.

The following table provides the typical SysLog message characteristics per NODE B.

SysLog characteristics
Daily syslog data size (KB)              110
Syslog message average size (bytes)      214
Average messages/day                     526

Table 55 : SysLog message characteristics

Based on these average values (526 messages/day of 214 bytes amount to roughly 110 KB/day per
NODE B), the additional workload on a WMS Large server E4900 (8 or 12 CPU) is negligible: it does
not affect WMS performance in terms of disk usage, average network load or disk I/O.

8.3. BANDWIDTH REQUIREMENTS BETWEEN THE ROC AND THE CLIENTS

Engineering Recommendation: Throughput and Bandwidth Requirements

The minimum throughput requirement per WMS Client is 1 Mbps. Where the network is constrained
in terms of bandwidth, a SOC (Server of Clients) solution should be considered.
To comply with the throughput requirement, every client should be connected to an Ethernet switch
through a 100/1000 Mbps connection.

13: The list of 939x Node B models is: 9391, 9392, 9393 and 9396.


8.3.1 OTHER ENGINEERING CONSIDERATIONS

- For NPO Client bandwidth requirements, refer to section 11.10 in the NPO Chapter.
- The WQA Client is a web based client and requires very low bandwidth (128 Kbps) per client.
- RFO is an offline call trace tool that requires ad hoc bandwidth depending on the number of call
  trace files downloaded and the frequency of download.
- WPS is an offline configuration tool that requires ad hoc bandwidth depending on the number of
  work orders, cmXML imports/exports, etc. performed on a daily basis by the user.

8.3.2 SERVER OF CLIENTS

This section lists the bandwidth requirements between the ICA Clients and the Server of Clients.
Bandwidth Requirements for ICA Clients (Kbps) = 64 Kbps base + (64 Kbps * #_of_ICA_Clients)
For example, five concurrent ICA clients require 64 + (5 * 64) = 384 Kbps.
The low bandwidth requirements of the ICA client make the Server of Clients an ideal option for
accessing the WMS GUI from remote locations using the VPN remote access solution.
Please note that not all operations are supported with the Server of Clients, such as patch and
software download. The value above provides the bandwidth that the Citrix Server of Clients will
utilize, but does not account for software/patch downloads. Sufficient bandwidth must be provisioned
for the software download duration to be acceptable: between 512 and 1024 Kbps provides
acceptable download speeds.

8.4. BANDWIDTH REQUIREMENTS BETWEEN THE ROC AND EXTERNAL OSS

The table below lists the bandwidth requirements from the WMS Main Server to the Fault and
Performance OSS. The bandwidth requirements towards the Performance OSS cover the transfer of
XML files in "gzip" format per Network Element.

Bandwidth Requirements between ROC and Performance OSS, per Managed NE (Kbps)
NODE B                         0.6
RNC                            300

Bandwidth Requirements between ROC and Fault OSS (Mbps)
Managed Network (UTRAN NEs)    0.5
Table 56 : WMS Main Server to OSS Bandwidth Requirements
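
As a minimal sketch (assuming the per-NE Performance OSS requirements are simply additive), the
aggregate bandwidth towards the Performance OSS for a hypothetical network of 3000 NODE Bs and
10 RNCs can be estimated as follows; the result is in Mbps:

#> NODEBS=3000; RNCS=10
#> echo "scale=1; ($NODEBS * 0.6 + $RNCS * 300) / 1000" | bc
4.8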

9. SECURITY AND REMOTE ACCESS

Several security topics are covered throughout the WMS engineering guide. The intent of this section
is to summarize those topics and to cover other miscellaneous security topics related to WMS. This
section also points to other documents containing security information related to WMS.

9.1. OPERATING SYSTEM HARDENING

Operating system hardening scripts are included with the WMS software load. The OS hardening
scripts disable all services that are not used by the software applications residing on the server (and
that are potentially unsafe).
It is important from a security perspective to ensure that all WMS servers have been hardened.
Furthermore, some OS hardening features are optional. For example:
- sendmail can optionally be turned off if the trouble ticketing option is not installed on the Primary
  Main Server.
- Graphical login support (i.e. X11) can be disabled on screenless servers. From a security
  perspective, it is recommended to disable X11 on the WMS servers and to always use client
  workstations/PCs for any graphical application.
- Oracle security can be increased on the Primary Main Server by enabling the Oracle valid node
  checking feature. It is recommended to activate this feature.
- Other daemons, such as the router discovery daemon (i.e. in.rdisc), can optionally be turned off
  on all servers.
It is recommended that all these optional OS hardening features be activated where possible.
Please see "WMS Security - NN-10300-031", for procedures and detailed information on operating
system hardening.
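
For illustration only (the hardening scripts delivered with WMS remain the reference, and whether a
given service may be disabled depends on the options installed), such optional services are controlled
on Solaris 10 through SMF, e.g.:

#> svcadm disable svc:/network/smtp:sendmail
#> svcadm disable svc:/application/graphical-login/cde-login
#> svcs svc:/network/smtp:sendmail
STATE          STIME    FMRI
disabled       10:02:11 svc:/network/smtp:sendmail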

9.2. AUDIT TRAIL

On the WMS servers, several events are logged as part of the "audit trail" functionality provided
through the Solaris SunShield Basic Security Module (BSM). Audited events include user
creation/deletion, login/logout attempts, FTP sessions, switch user commands, etc.
All logged events are stored locally in audit log files. A subset of these events is sent to the
centralized security audit logging (CSAL) mechanism located on the WMS Main Server, i.e. some of
the data in the audit log files is sent to the logging service.
The CSAL system is a service in the Network Services Platform that manages the network audit log
files. You can use the CSAL system to view the data in flat text files.
The Logging Service enables applications or system components to create and manage logs. It also
provides central access to security logs and events.
From a security perspective, the audit log files as well as the CSAL data should be properly stored to
tape. The historical data archive & retrieval feature can be used to back up the audit log files.
To avoid filling the disks to 100% with audit log files, an automatic purging mechanism deletes old
audit log files when disk capacity hits a certain threshold. Therefore, to prevent the potential loss of
audit log files, the archive and retrieval backup must be executed at an appropriate frequency. It is
recommended that the historical data archive be done daily.
Please see "WMS Security - NN-10300-031", Security Fundamentals and Implementing Security
Chapters for procedures and detailed information on audit trail.
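
As an illustrative sketch only (the procedures in the document above remain the reference), BSM
auditing is typically enabled on a Solaris server with the bsmconv script, and the audited event
classes (e.g. lo for login/logout, ad for administrative actions) are selected in
/etc/security/audit_control:

#> /etc/security/bsmconv
#> more /etc/security/audit_control
dir:/var/audit
flags:lo,ad
minfree:20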

9.3. WMS ACCESS CONTROL

WMS allows different levels of access control to be set for the different users of the solution. It is
possible to restrict or allow access to certain features, tools and operations for specific groups of
users.
It is recommended to create a different user role for each distinct user function, e.g. one role for
directors with viewing privileges only, another group for operators of certain regions of the network,
another group for different NE types/domains (i.e. Access), and so on. Each user is allowed to access
only the resources that belong to his/her user group and that are needed to perform his/her functions.
This is a recommended security practice.
For example, where WMS is used to manage an end-to-end wireless network spanning an entire
country, different OAM user groups can be configured for different functions or different regions of the
network. Each group is configured to only have access to the resources pertaining to that group.
Groups can be divided so that, say, a Supervisors group is allowed to view alarms, a Technicians
group has access to alarms and configuration tools, a support group has access to the Network
Elements for troubleshooting, and so on.
Please see "WMS Security User Configuration and Management NN-10300-074" for procedures
and detailed information on access control.

9.4. NE ACCESS CONTROL LISTS (ACL)

The WMS servers communicate with the managed NEs using, in most cases, the management port
of these NEs. It is good security practice to enable access control on these NEs so that connections
to the management port are restricted to known hosts/networks that need to communicate with it,
such as the WMS servers. It is recommended to use this security feature on all NEs that support
ACLs on the management port.
Currently, Network Elements based on Multi Service Switch 7400, 15000, and 20000 (previously
Passport) support this functionality.

9.5. USER SECURITY ADMINISTRATION

The Integrated Element Management System provides centralized authentication, administration, and
authorization. It is good security practice, and strongly recommended, to enable all the security
features available with WMS.
Password modification
In WMS, several applications and system accounts are protected by password authentication.
The WMS user authentication module includes, as part of the OAM user password validation, support
for basic password dictionary validation.
It is strongly advised to change all default passwords after the initial installation of any WMS
application and to change the passwords periodically.
Please see "WMS Security - NN-10300-031" for procedures on managing user accounts and
passwords.

Account lockout
WMS allows an account to be locked out after a configurable number of failed log-in attempts.

9.6. MULTISERVICE DATA MANAGER (IP DISCOVERY)

The IP Discovery tool is part of Multiservice Data Manager, which is available on the WMS Main
Server.
The IP Discovery tool is used to add/delete NEs from the NSP GUI. By default, no password
authentication is required to launch this tool aside from the NSP access control restrictions that are
configured. This means that any user with access to the MDM toolset is able to launch and use the
IP Discovery tool, which might not be the desired behaviour.
It is recommended to enable password authentication for the IP Discovery tool. For related
procedures, please refer to Nortel Multiservice Data Manager Fault Management tools - 241-6001-011.

9.7. CENTRALIZED OAM USER MANAGEMENT

Centralized WMS user administration reduces security vulnerabilities by providing OAM user
authentication, authorization privileges and administration across the supported WMS applications.
All users, including NE users, are able to authenticate against the centralized WMS user directory. For
NE users, RADIUS authentication is used with the NEs (supported on the RNC and on the xCCM
boards of the NODE B): the WMS embedded RADIUS server authenticates the access request
against the WMS user directory.
Please see "Nortel Multiservice Data Manager Security Fundamentals - NN10400-065" for
information about centralized OAM authentication and RADIUS.

9.8. USER SESSION MANAGEMENT

This feature allows the operator to identify the list of currently logged-in users and gives the
administrator the ability to lock out or force out users.

9.9. SSL

WMS supports X.509 certificates in Privacy Enhanced Mail (PEM) format for Secure Sockets Layer
(SSL). SSL secures the authentication channels between the clients and the servers (Primary and
Secondary Main Servers).
WMS supports the maximum level of security with regards to key encryption methods allowed by any
Web browser (1024 bits).
Please see "WMS Security - NN-10300-031" for procedures on enabling SSL on WMS.
Engineering Note: SSL
SSL does not secure all the network traffic between the client and the servers; it only secures the
authentication channels between the clients and the servers. For complete security of the packets
between client and server, IPSec is recommended (the Alcatel-Lucent Brick 150 can be used as a
secure VPN IPSec gateway).


9.10. RADIUS/IPSEC
As an option, the external authentication module provides authentication of the WMS operators
against an external authentication server (provided by the customer) using the RADIUS protocol.
Support of IPSec between WMS and the NEs provides secured OAM traffic between the NEs and
WMS. The table below lists the NEs that support RADIUS and/or IPSec.
Protocol/NE    RNC    Node B 14    Node B (Model: 939x)    MSS 74X0
Radius         X      X            X
IPSEC          X      -            X

Table 57 : NEs supporting RADIUS/IPSec


Engineering Recommendations: IPSec
For IPSec connections to the RNC, Alcatel-Lucent recommends configuring IPSec with IKE.

IPSec connections for OAM communication are supported between the Primary and Secondary Main
Servers and Multiservice Switch based network elements such as the RNC. IPSec provides
confidentiality and integrity of the OAM messaging: credentials are encrypted, and data origin
authentication is provided by a shared secret key.
The main objective of implementing IPSec is to protect the ftp, telnet, FMIP, NTP, and RADIUS
services. ICMP and ftp-data are not encrypted since this data is not sensitive, so two bypass policies
are created for those two services. All other traffic is discarded.
The following restrictions, limitations, and supported configurations apply:
- Multiservice Switch based network elements must be running PCR7.2 or newer
- Encapsulated Security Payload (ESP) headers are used for encryption and authentication;
  Authentication Header (AH) is not supported
- IPSec is configured in transport mode; tunnel mode is not supported
- HMAC-MD5 and HMAC-SHA1 authentication algorithms are supported
- AES, DES and 3DES encryption algorithms are supported

IKE (Internet Key Exchange) was introduced in OAM5.1. It is a standardized protocol used to automate
the deployment of IPSec SAs and keys. IKE is based on a framework protocol called ISAKMP and
implements semantics from the Oakley key exchange; IKE is therefore also known as
ISAKMP/Oakley.
The IKE protocol has two phases:
- Phase 1 establishes a secure channel between the two peers,
- Phase 2 negotiates the IPSec SAs through the secure channel established in Phase 1.
Please see "WMS Security - NN-10300-031" for procedures and detailed information on RADIUS and
IPSec configuration.

9.11. SOLARIS SECURE SHELL (SSH)


Solaris SSH provides a suite of tools that are a secure alternative to the traditional telnet, ftp,
rlogin and rsh commands. Operators and administrators should always use SSH utilities such as ssh,
sftp and scp for their administrative tasks.

14: NODE B Models: 931x, 932x, 933x and 934x


This section provides some basic security considerations when using SSH.
All WMS servers are installed with Solaris SSH.
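
For example, a legacy ftp file transfer can be replaced with sftp or scp as follows (the host name,
account and paths are illustrative only):

#> sftp oamadmin@mainserver
#> scp /tmp/report.log oamadmin@mainserver:/var/tmp/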
Protocol Version
WMS supports only SSHv2, as it is the most secure version.
Access Control
For added security, access allow/deny lists should be created to filter incoming connections to the ssh
daemon. This administrative task can be done by creating the following access allow/deny files (these
files should be owned and writable only by the root user):
"/etc/hosts.allow" and "/etc/hosts.deny"
The syntax of these files is as follows:
#> more /etc/hosts.allow
sshd: 50.0.0.0/255.0.0.0, 10.10.10.0/255.255.255.0
#> more /etc/hosts.deny
sshd:ALL
In the example above, the SSH daemon will only accept connections from SSH clients on the
50.0.0.0 and 10.10.10.0 networks. All other connection requests will be denied.

Logging
As a good security practice, it is recommended to use the syslog daemon to capture SSH daemon
logs and to monitor these logs periodically for any suspicious activities. To enable this feature, the
following operations need to be performed by an administrator on each of the WMS servers:
Edit the /etc/syslog.conf file to include the following entry:
auth.info <tab> /var/log/authlog (It is important to use a tab and not spaces!)
Also, the "/etc/ssh/sshd_config" file must contain the following lines:
SyslogFacility AUTH
LogLevel INFO

Engineering Note: Syslog


For the changes to be effective, the syslog daemon must be restarted. Also, this setting might be reset
after an upgrade. It is recommended to verify these settings after a software upgrade.

Sample log entry:


Jun 15 15:26:01 main sshd[10934]: [ID 800047 auth.info] Accepted password for root from
47.214.224.63 port 38555 ssh2
SSH X11 Forwarding
Solaris SSH has a functionality called X11 forwarding. This feature can be used by the operators or
administrators to securely transfer X11 traffic.


E.g. Using a UNIX client, an administrator needs to launch an application on one of the WMS servers
and have the display of that application sent back to the UNIX Client.
It is possible to perform this operation using the X11 forwarding option of ssh by entering the following
command from the command prompt of the UNIX client:
UNIX Client#> ssh -X <Main Server IP@> <application>
<application> can be any application using a GUI such as xterm, xclock or other.

9.12. SNMP
Several components of WMS are managed using the SNMP protocol. The supported SNMP versions
are:
MDM 16.1: SNMP V1, V2
SMC 3.6.1: SNMP V1, V2, V2 USEC entities (i.e. User-based Security Model)
Security Recommendation (Community Strings)
When the WMS servers are installed, or when new devices are integrated with the Main Server, it is
important to change the community strings to values other than the defaults. Leaving the default
community strings unchanged on devices in a network can allow a malicious user to gain access to
sensitive information about the configuration of the devices in that network.

9.13. IP FILTERING
This feature implements IP Filters within Solaris 10. Support of IP Filters gives the WMS servers a
host based IP filtering capability (a host based firewall) within the server solution. This allows
flexibility in provisioning the desired firewall policy rules, as desired, to ensure that the WMS servers
comply with customer security firewall policies.
IP Filtering provides a first level of security directly on the server.
The integrated support of WMS specific IP Filter rules also helps to further harden the deployed WMS
server solutions, by restricting the accessibility of any weak services that are still currently used within
the WMS solution, where applicable, as well as restricting the visibility of any local ports and services
used within the WMS servers to external systems.
This feature provides a set of default rules for the IP Filter firewall which ensure that any local ports
used within the WMS servers are not visible to external systems.
It also allows customers and/or Alcatel-Lucent personnel to make changes to the IP Filter rules, if
and as desired, and ensures that any such changes are preserved across upgrades and installations
(over and above the default set of rules).
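
As an illustration of the Solaris 10 IP Filter rule syntax (these are not the default WMS rules; the
interface name and network below are assumptions), a minimal policy allowing only SSH from the
OAM LAN could look like:

#> more /etc/ipf/ipf.conf
block in all
pass in quick on bge0 proto tcp from 10.10.10.0/24 to any port = 22 keep state
pass out quick on bge0 all keep state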

9.14. FIREWALL
As detailed in the Network Architecture section, it is recommended to deploy a VPN Router firewall
between the NOC and the ROC (i.e. between the clients and the servers).
The recommended firewall is the VPN Firewall Brick 150. It is proposed as the firewall between the
OAM LAN and the WAN access to the Network Elements, as well as for the access to another OAM
ROC.


If a Client PC on the network is accessed by multiple users or a machine gets its IP address
dynamically, it might be necessary to allow access to a particular user only and not to the machine
itself.
Hardware and software requirements for the Brick 150 are provided in the WMS Solution Hardware
Specifications section of this document.

9.15. SECURITY FEATURES ON THE WMS DCN EQUIPMENT


This section provides an overview of the security features available on the various DCN components
recommended for the WMS DCN.
OmniSwitch 6850
For detailed information, please refer to http://www.alcatel-lucent.com.


10. NETWORK TIME SYNCHRONISATION

Proper time synchronization is useful and should be considered mandatory in order to adequately
support the requirements for accounting/billing CDRs, network and fault management, efficient
troubleshooting and support, security and audit logs, and performance counter correlation. Network
Time Protocol (NTP) is the main protocol used in the Alcatel-Lucent Wireless OAM network to
synchronize the time of day (TOD) on the servers and the NEs.

10.1. ABOUT NTP FUNCTIONALITY


NTP is composed of servers and clients that exchange information about their system time. NTP is
based on a client-server, master-slave architecture. The WMS implementation of NTP uses unicast
mode (more accurate and secure than broadcast mode), in which the NTP client initiates the time
information exchange. After an NTP client sends a request to the server, the server sends back a
time stamped response, along with information such as its accuracy and stratum (see below). The
NTP client receives the time response from an NTP server (or servers) and uses the information to
calibrate its clock: it determines how far its clock is off and slowly adjusts its time to line up with that
of the NTP servers. Adjustments are based on many time exchanges, and involve filtering and
weighting as defined in the protocol.
In order to increase accuracy, corrections are applied on the client side to eliminate skew caused by
network latency. The NTP client estimates the travel time and the remote processing time once it
receives the packet from the NTP server. The NTP algorithms assume that the one-way travel time
between the NTP server and the client is half of the total round trip delay (minus the remote
processing time). Since this assumption is not always 100% accurate, it is generally accepted that as
the travel time to and from the server increases, the probability of a loss of accuracy increases.
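
For reference, the standard NTPv3 calculation behind this is the following, with t1 = client transmit
time, t2 = server receive time, t3 = server transmit time and t4 = client receive time:

round trip delay = (t4 - t1) - (t3 - t2)
clock offset = ( (t2 - t1) + (t3 - t4) ) / 2

The offset formula is exact only when the outbound and return travel times are equal, which is why
asymmetrical routes degrade accuracy.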
In the context of NTP, the stratum is the number of NTP server levels relative to a reference clock
that is considered the most accurate time source for a given network. Stratum-1 is the most accurate
level (for example, a GPS type transceiver with an NTP interface). Clients which synchronize on
stratum-1 servers are considered stratum-2. Some nodes (such as the WMS servers, which are based
on Solaris) can offer both NTP client and server functionality. Clients which use stratum-2 servers
become themselves stratum-3, and so on. The higher the stratum number, the less accurate the
synchronization is considered to be.
Note that local time zone settings are outside of the scope of NTP. These settings must be set locally
on each device. See important considerations at the end of this chapter.

10.2. COMPATIBILITIES
NTP version 3 should be deployed as part of the UMTS OAM solution. NTP V3 (RFC 1305) is the
most popular version (and the default for most devices).
Implementing NTP within the UMTS network is straightforward, since support for NTP already exists
on the NEs and servers. This includes the RNC. All Solaris based OAM servers also support NTP,
including the WMS Main Servers, the NPO server, the Server of Clients and the Unix Clients.

Engineering Note: NODE B synchronization

The UMTS NODE B synchronizes with the WMS Main Server through a proprietary protocol; as long
as the Main Server is adequately synchronized, the NODE B will be as well.


10.3. TIME SOURCE SELECTIONS


It is not in the scope of this document to propose recommended vendors or types of stratum-1
servers. Note, however, that considering the accuracy requirements discussed here, GPS type time
servers meet the requirements and appear simple to operate and maintain at acceptable cost levels.

10.4. REDUNDANCY AND RESILIENCY


Redundancy is essential in the NTP configuration. As mentioned above, the NTP clients (including
intermediary servers such as the WMS servers) should connect to at least two lower stratum NTP
servers (Primary Main Server or Secondary Main Server, stratum-1 GPS NTP server if available...),
and this number can be increased to three or four.
Having one (1) GPS type NTP server co-located in each ROC (with a minimum total of two (2)
available to the WMS Main Server) should be considered, specifically when there is local legislation
on the accuracy of the timestamps used for billing.
Following the above recommendations will minimize the impact of brief connectivity outages.

10.5. DEFAULT BEHAVIOUR OF WMS MAIN SERVER UNDER OUTAGE CONDITIONS

In the rare situation of an outage of the Main Server time sources (Network Time Server (NTS) down,
loss of connectivity), the WMS Main Server will continue to distribute the time based on its own
internal clock, which is still corrected using the drift trend established while it was synchronized. It is
configured this way to ensure that all devices stay synchronized amongst themselves for logging
purposes (security and troubleshooting).
Testing has shown that a Solaris server which was previously synchronized typically drifts by about
100 msec per day (less typical values being around 400 msec/day).
This WMS Main Server default behaviour needs to be weighed against the time accuracy
requirements for billing. Should the time synchronization requirements for the NEs involved in billing
be tight (1 second or less from national standards), changing this behaviour could be considered: the
alternative behaviour is for the Main Server to stop distributing time when it has lost contact with its
lower stratum synchronization sources. In that situation, the NEs will generate alarms for loss of time
synchronization.

10.6. RECOMMENDED NTP ARCHITECTURE


It is recommended to use the WMS Main Server as the central point for distributing the time
throughout the network. The main advantage of using the Main Server is that it already has
connectivity to all the NEs and EMSs managed by a ROC which require time synchronization.
To ensure accurate distribution, the Main Server should get the time from all the time sources
available in the overall wireless network (up to 3 or 4 if possible). This limits the OAM traffic between
the different operating centres and simplifies firewall rule management, since only the WMS Main
Server gets the time from servers outside of the NOC. Refer to the figure below for the recommended
time synchronization architecture.
For redundancy reasons, each client (NEs, other OAM servers...) should also be configured to get the
time from an alternate source, or even from the stratum-1 time source if it can be reached from the
client. This is normally possible in cases where the OAM reference architecture is not followed and
the OAM_OAM network is the same as the OAM_NE network.
In terms of the reference OAM architecture, the NTP stratum-1 sources would normally sit on the
Server LAN (the OAM_OAM network according to the WMS documentation).
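
As a minimal sketch of the corresponding client configuration on a Solaris based OAM server (the IP
addresses are hypothetical), the /etc/inet/ntp.conf file would contain entries similar to:

#> more /etc/inet/ntp.conf
server 10.1.1.10 prefer
server 10.1.1.11
driftfile /var/ntp/ntp.drift

where 10.1.1.10 and 10.1.1.11 stand for the Primary and Secondary Main Servers (or for stratum-1
sources, where reachable).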

10.7. USING PUBLIC TIME SOURCES OVER THE INTERNET


There is no engineering requirement to have Internet connectivity from the OAM network, and this
would normally be avoided. Should a public Internet source be used as a time reference, it is
recommended to build an intermediary stratum server somewhere off the OAM network (a bastion
NTP server). Special focus should be placed on the security of this server. As a minimum, this
intermediary server should have peering and remote configuration/monitoring disabled. Standard NTP
access control (ntp.conf file) should restrict all NTP communications to the servers involved in the
configuration (i.e. the NTP servers on the Internet side and the WMS Main Server if this server is the
main time distribution point). The optional key and encoding features of NTP could be considered to
authenticate the source. NTP time sources available on the Internet normally charge a monthly fee
for such services but offer a guaranteed level of accuracy. If guaranteed accuracy servers are not
used, it is recommended to use 3 or 4 sources off the Internet so that the intermediary time server
can take advantage of the rich NTP algorithms to determine whether some servers are inaccurate or
incorrect.
Firewalls should also be used and, if they allow flow control, it should be assumed that the
maximum rate is 1 packet per minute. This can offer some protection from denial of service attacks.
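
A minimal sketch of such an access restriction on the bastion server, using the standard NTPv3
restrict keyword (all addresses below are hypothetical):

#> more /etc/inet/ntp.conf
restrict default ignore
restrict 192.0.2.10 nomodify noquery
restrict 10.1.1.10 nomodify
server 192.0.2.10

Here 192.0.2.10 stands for the Internet time source and 10.1.1.10 for the WMS Main Server; all other
NTP communication is ignored.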

[Figure: a public time server on the Internet feeds a secured intermediary NTP server located off the
OAM network (which can also be used as backup); the Primary and Secondary Main Servers in the
ROC synchronize on it; the NEs and other OAM servers use the Main Servers as preferred and
secondary NTP sources; another ROC is connected similarly.]

Figure 27 : Recommended Time Synchronization Architecture

10.8. NTP ACCURACY AND NETWORK DESIGN REQUIREMENTS


A key driver for synchronization accuracy requirements is billing. Typical billing accuracy requirements
may vary from one country to another, but are typically within +/- 1 second of national time standards.
It has been demonstrated that the upper bound of the typical synchronization error achieved using
NTP in unicast mode is around 50 msec (or 25% of the average round trip time between
servers/clients, accumulated down to the stratum-1 source). Note that this is the accuracy of the time
at the system/OS level (there can be some extra internal delay on a server or NE when associating a
time stamp with an event).
This estimate assumes that the actual OAM network design follows two standard engineering
guidelines which ensure optimal NTP accuracy: a) symmetrical transport times between the server
and client, b) avoidance of sustained congestion conditions.
Note on time accuracy convergence: after initially starting the NTP processes, it may take several
minutes, or even up to an hour, to adjust a system's time to the ultimate degree of accuracy. To avoid
long stabilization times, an initial manual adjustment of the local clock can be done before starting the
NTP processes.

10.9. NTP RESOURCE USAGE CONSIDERATIONS


Compared with the flow of fault and performance information on the OAM network, NTP traffic is
negligible. The CPU resource consumption of NTP on the Solaris servers is also negligible.
Requirements on local time zone settings
Local time zone settings are outside the scope of NTP (Network Time Protocol): NTP only
synchronizes time at a lower level (similar to GMT or UTC). Regional time specificities such as time
zones and daylight saving time (DST) are normally set on each node, NE or server.
Wireless network nodes as well as the WMS components deal with time zones in different manners
and by sending different levels of information. Because of this, the following recommendations and
requirements have been made in order to simplify network management. Recall that the network
management function is normally facilitated when time stamps related to events can easily be
correlated together.
Preferred Recommendations
The time settings on all NEs, OAM servers and clients must be identical (all set to UTC, or all set to a
single time zone).
Note that having the clients set to a different time zone than that of the OAM server can be a source
of unsupported issues relative to the accurate display of time in alarms and in reports.
Alternative proposal
The following alternative proposal takes into account important considerations which apply when NEs
have already been integrated into a billing system. In all cases, the CDR time information is always
based on local NE time. The impact of changing the time or time zone on the NEs needs to be
adequately assessed: if the time or time zone is changed, corrective measures could be required at
the billing system level. Besides the legal aspects of time stamp accuracy requirements in billing,
subscribers will require exact time stamps if these are used to list events on their bills. Billing
considerations could therefore prevent following the preferred recommendations above. In such a
situation, an alternative is to keep some NEs set to their actual time zones (i.e. actual local times are
set on NEs spanning multiple time zones).
This alternative is not the preferred one from an operational point of view, given that the correlation of
time related information becomes more complex. When no billing impacts are identified, the preferred
time zone recommendations (NEs and OAM system set to a single time zone) must be followed.
The following recommendations and notes apply to this alternative:
UMTS Access networks do not generate billing CDRs. Therefore, they must follow the preferred time
zone recommendations (homogeneous single time zone setting). Given that the UMTS Access RNC
and NODE B deal with time offsets in very different manners, any deviation from this recommendation
for the UMTS Access network will create inconsistent time stamp information and will complicate
network management.
Engineering Note: daylight saving time
There is a known issue with respect to the daylight saving time change on the RNC. If a time
synchronization discrepancy materializes between the RNC and the OAM, some alarms, such as bad
data file alarms related to counters, can be generated via the WMS GUI due to a time stamp
inconsistency. These alarms must be cleared manually.

The WMS Client must be set to the same time zone as the OAM servers.
The consequence of choosing this alternative strategy is that correlating time information from nodes
in different time zones will not be as straightforward as when they are all in the same time zone.


11. 9359 NPO NETWORK PERFORMANCE OPTIMISER

11.1. OVERVIEW
This section describes the NPO (Network Performance Optimiser), a performance management
reporting application for WMS.
NPO is an option that offers a full range of multi-standard QoS monitoring and radio network
optimization facilities. It requires a dedicated server in the LAN, together with the installation of the
NPO client application running on a PC.
11.2. HIGH LEVEL OVERVIEW AND SOLUTION ARCHITECTURE

NPO retrieves the counters files available on the WMS Main Server and produces indicators
accordingly. In addition, the network configuration (topology) is uploaded from the WMS Main Server.
Two types of data are used by NPO for the production of indicators: data information (counters and
network topology) and meta data files (indicator definitions, counter dictionary and NPO parameter
definitions).

[Figure: the PC Client communicates with the NPO server for QoS configuration and tuning (CORBA
services) and for administration (Web services); the NPO server, hosting an Oracle database and the
meta data definitions, retrieves the counters files and network configuration files from the WMS Main
Servers via FTP file transfers.]

Figure 28 : NPO Architecture


Most of Meta data are not modified and are pre-defined by Alcatel Lucent according to system and
engineering definition. Additional indicator definition can also be configured in NPO.
NPO Application relies on JAVA applications with CORBA services and the administration is managed
though WEB HTTP connection.


11.3. CAPACITY CONSIDERATIONS


The following table provides dimensioning indications: the number of concurrent user client sessions
and the network size supported by the different types of NPO server. The most important parameters
for selecting the type of NPO server are the number of 3G cells and the number of UMTS Node Bs,
where the number of Node Bs must not exceed 1/3 of the total number of 3G cells.

NPO Packaging   Server Type                                 Number of     Network Size in   Maximum
                                                            concurrent    Cells/Node B      Number RNC
                                                            users
Small           SUN Netra 240 (Legacy)                      15            1400/500          15
                SUN Enterprise T5220 (Nominal)
Medium          SUN Fire V490-2CPU (Legacy)                 15            4500/1500         27
Large           SUN Fire V490-4CPU (Legacy)                 15            9000/3000         38
                SUN Enterprise M4000-2CPU (Nominal)
X-Large         2 x SUN Fire V490-4CPU cluster (Legacy)     15            18000/6000        60
                SUN Enterprise M4000-4CPU (Nominal)

Table 58 : NPO Server Packaging


Engineering Note: Restrictions with regards to the 15 minutes period counters reporting feature
If the 15 minutes period counters reporting feature is activated on an NPO X-Large configuration
based on the legacy cluster of 2 SFV490-4CPU, a maximum of 4500 Node Bs has to be considered
instead of 6000.

The complete description and characteristics of each server model are available in the hardware
specifications section 6.2.

11.4. NPO CLUSTER


11.4.1 OVERVIEW
Engineering Note: Machines applicable to the NPO cluster configuration
The NPO cluster configuration is applicable to the legacy SFV490 4 CPU machines only.


NPO Cluster was introduced in NPO Release M2.3 (OAM 5.2) with one Master and up to one Slave
server. In M3.0 (OAM6.0), the cluster configuration allows a maximum configuration of one Master
and two Slave servers. There is no cluster configuration in releases prior to M2.3.
NPO Cluster allows multiple NPO servers to be grouped together in order to improve computing
performance and increase the supported cell capacity.
The NPO cluster relies on the Oracle Real Application Cluster (RAC) solution, which allows multiple
servers to run the Oracle RDBMS software simultaneously while accessing a single database.

11.4.2 ARCHITECTURE
The NPO cluster is composed of a master node and one or more slave nodes. Only the master node
communicates with the NPO clients. The slave nodes are in charge of providing computing
performance for the file loading and data processing activities.
Between the NPO cluster nodes, the following flows are implemented:
- the clustering activity flow (Oracle RAC flow), which ensures that tasks or computations can be
  spread over the various nodes,
- the file sharing activity flow (NFS and rsync), which allows NPO files to be shared,
- the time synchronization flow (NTP), which is mandatory for clustering,
- the IIOP (Internet Inter-ORB Protocol) flow, used mainly for process monitoring and logging
  activities.
The figure below shows the implementation of the NPO Cluster in the UTRAN OAM solution.


[Figure: the NPO clients reach the NPO cluster through a floating IP address; the master and slave
nodes each expose public, private and virtual IP addresses and communicate with the WMS Primary
and Secondary Main Servers.]

Figure 29 : NPO Cluster Architecture


For more information on NPO Cluster network interfaces, please refer to section 7.6.
For more information on the NPO HMI server, please refer to document 9359 NPO HMI Server and
Client Installation, 3BK 17429 9088 RJZZA"

11.4.3 FAILURE SCENARIOS


In an NPO cluster, the following scenarios describe the consequences of a crash of the Master or of a
Slave NPO server.

Master node crashes
- NPO Client applications cannot connect anymore.
- Scheduled reports (locally stored) are lost.
- If a consolidation was on-going (it can run on any node), it is interrupted.
- The services associated with the generic loaders are automatically launched on a slave node.


In this case, a failover procedure can be applied so that a slave node is re-declared as the master
node; the LDAP database (used for the list of users and their access rights) is automatically exported
to the new master node. The slave node detects the failure and takes over the floating IP address, so
access for the NPO Clients is not impacted.
Once the Master server is recovered, consolidation of data restarts with minimal data loss, and the
server runs with degraded performance until all the data is recovered and consolidated.
Slave node crashes
- If a consolidation was on-going (it can run on any node), it is interrupted.
Once the Slave server is recovered, consolidation of data restarts with minimal data loss, and the
server runs with degraded performance until all the data is recovered and consolidated.
Backup and Restore
In a cluster configuration, the backup and restore operations can be performed only with a centralized
backup and restore solution (e.g. LEGATO) and must be done only on the master node. OSB (Oracle
Secure Backup) backup and restore is not supported in a cluster configuration.

11.5. QOS MANAGEMENT


11.5.1 NOMINAL MODE
NPO can support different QoS granularity periods depending on the NE's capability to provide raw
data. The GPO (General Permanent Observation) period supported by the Network Elements, i.e. the
frequency at which QoS files become available in WMS, is described in section 3.6.8.1 (e.g. 15
minutes, 30 minutes).
Once observation files are available within the WMS data source, they are automatically managed by
NPO. PM loading is thus done continuously; this is the nominal and regular activity of PM data
processing.

11.5.2 RECOVERY MODE


In case of an exceptional outage or anomaly affecting NPO (e.g. link cut, server unavailability) during
a certain period, a large number of QoS files may be waiting to be processed by NPO when the
system is re-established. In such a condition, a recovery period is observed before NPO returns to its
nominal mode. The duration of the recovery depends on several conditions: the duration of the
outage, the quantity of observation files (driven by the number of network elements and the
granularity), the type of machine, etc.
In case of manual file management with WMS (feature 33282 - Manual collection and mediation of
counters files), used when missing periods are observed in NPO (e.g. the rare conditions where a
network element has not sent a file, or a file is corrupted, etc.), the files can be managed by NPO
within a maximum period of 3 days in the past.
Engineering Note: Maximum previous days for recovery
The maximum capability for managing outdated files in NPO recovery mode is limited to 3 days in
the past (including the current day).

11.6. TOPOLOGY GRANULARITIES


The operational configuration is uploaded periodically once per day at midnight. The configuration can
also be manually updated on user request.

11.7. NPO PURGE FUNCTIONALITY


The NPO purge policy makes it possible to keep a large history of data. The policy is the following:
- Monthly indicators are preserved for 15 months,
- Daily and weekly indicators are preserved for 400 days,
- Raw, hourly and 1/4h indicators are preserved for 21 days.

11.8. CONSIDERATIONS FOR BACKUP AND RESTORE

11.8.1 SCOPE
Two levels of NPO data are considered as part of NPO backup and restore: the NPO essential data
(ORACLE and LDAP data) and the NPO system data (NPO system files). The NPO system data
concerns the NPO application itself, i.e. the complete NPO application image, which can be restored
without requiring re-installation of the NPO application.
The NPO essential data (ORACLE and LDAP data) is managed through the Oracle Recovery
Manager (RMAN). This utility provides database backup, restoration and recovery capabilities,
addressing high-availability and disaster-recovery concerns.
NPO supports two backup and restore methods: local tape backup & restore using Oracle Secure
Backup Express (OSB), and centralized backup & restore.

[Figure: the NPO essential data (ORACLE, LDAP) can be backed up locally with OSB to tape drives;
both the NPO essential data and the NPO system data (at OS level) can be backed up through a
centralized backup infrastructure.]

Figure 30 : NPO Backup and restore overview

- Local tape backup and restore using OSB (Oracle Secure Backup), applicable where tape drives
  are supported: this solution is recommended for small and medium networks (it is not applicable to
  NPO in cluster configuration) and applies to the NPO essential data only.
- Centralized backup & restore: this solution covers the two levels of data and is used to interface
  with any 3rd party backup and restore infrastructure. The purpose of the centralized method is to
  provide generic interfaces usable by any 3rd party: the 3rd party agent interacts on one hand with
  the Media Manager for the management of the ORACLE and LDAP databases, and on the other
  hand manages the system data separately, based on the system catalogue 15 description (e.g.
  usually through a dedicated policy in the 3rd party engine).
The solution already supports the LEGATO 16 centralized solution.
For other 3rd parties (e.g. HP Data Protector or VERITAS), the Media Manager needs to be
configured accordingly.

[Figure: on the NPO server, the 3rd party agent backs up the NPO essential data (ORACLE, LDAP)
through the Media Manager (MM) and the NPO system data directly, towards the 3rd party server of
the centralized backup infrastructure.]

Figure 31 : Centralized Backup & Restore architecture

Engineering Note: Data Size considerations

A history of six months of essential data is a good basis (given the hourly indicators and the historical
purge mechanism) for data size estimations. After 6 months of data production:
- 700 GB are required for a network with 12 000 cells, 3000 BTS and 10 RNC,
- 1.2 TB are required for a network with 18 000 cells, 6000 BTS and 20 RNC.
Additionally, the system data backup takes about 20 GB of disk space. This figure must be multiplied
by the number of nodes in a cluster configuration.

11.8.2 POLICY
A backup and restore policy consists in producing the best possible NPO image so that the system
can be restored after any disaster scenario within the best possible delay. In case of an Oracle
database crash or anomaly, restoring the NPO essential data is sufficient. In case of a software crash,
the complete NPO image (essential and system data) becomes useful to avoid re-installing the whole
NPO application.
Engineering recommendation: Operational Effectiveness
It is recommended to use the centralized backup solution to allow quicker recovery of the NPO
system. By covering all the NPO data, including the system part, it avoids re-installation of the NPO
application: the system and essential data are simply restored on the machine.

15: The catalogue describing the backup of the NPO system is described in 9953 MP / NPO Platform
Administration Guide - 3BK 21315 AAAA PCZZA.
16: If LEGATO is chosen, the LEGATO agent can be installed and configured as part of the NPO
installation procedure. The system data is not part of the default installation procedure; it still needs to
be configured based on the system catalogue description (refer to 9953 MP / NPO Platform
Administration Guide - 3BK 21315 AAAA PCZZA).

However, for more critical disaster scenarios such as a server or OS (operating system) crash, a full
re-installation of the Unix OS and of the NPO application becomes necessary, followed by the
restoration of the Oracle and LDAP databases.
When performing a backup, there is no interruption to NPO. Backups can be performed manually or
scheduled for automatic execution. Backup/restore of the Oracle database is performed by the RMAN
(Recovery Manager) utility. After a successful backup, RMAN clears the archive logs, which avoids
filling up the disk space.
The backup can be launched in incremental mode or in full mode. The Oracle database runs in
ARCHIVELOG mode in order to allow online backups: in this mode, the Oracle database constantly
produces archive logs, which are needed for online backups and point-in-time recoveries.

Engineering recommendation: Backup Schedules

Alcatel-Lucent recommends scheduled backups with:
a. a full backup performed once a week, on Sunday at 10:00,
b. an incremental backup performed daily at 20:00.
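
For illustration only (the backup jobs packaged with NPO remain the reference), these two schedules
map onto standard RMAN commands such as:

#> rman target /
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE PLUS ARCHIVELOG;

where level 0 corresponds to the weekly full backup and level 1 to the daily incremental backup.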

11.8.3 LOCAL BACKUP AND RESTORE OSB


The following table summarizes the scope of application of the OSB method:

Server configuration        NPO Essential Data (ORACLE LDAP)    NPO System Data
NETRA 240 - 2CPU            DAT72, SDLT600                      -
SE T5220                    LTO4H                               -
SFV490 2CPU                 SDLT600                             -
SFV490 4CPU                 SDLT600                             -
SE M4000 2CPU               LTO4H                               -
SE M4000 4CPU               LTO4H                               -
Cluster (n*SFV490 4CPU)     not supported                       -

Table 59 : Local Backup & restore OSB Scope of usage

The following table provides the supported tape drives and their throughput:

Tape drive type    Capacity, Transfer Rate
DAT 72             36 GB, 3 MB/s
SDLT 600           300 GB, 36 MB/s
LTO4H              800 GB, 120 MB/s

Table 60 : NPO Server Tape Drive Throughput



Engineering Note: Restrictions

- This method performs the backup and restore of the NPO Oracle database only, which includes
  the LDAP user management database.
- DAT 72, SDLT600 and LTO4H are the tape drives supported by the OSB method. Other types of
  tape drives are not supported.
- OSB backup and restore is not supported in a cluster configuration.

Engineering Note: Capacity

With OSB, a backup cycle (a full backup and all following incremental backups until the next full
backup) is supported on a single tape only.
To keep more than one backup cycle, a centralized backup solution is recommended (multi-tape
mode is not supported).
As a consequence, the local backup solution through tape drives is not recommended for large
MS-NPO. Local backup to disk, or a centralized backup & restore solution with a partner
infrastructure (e.g. LEGATO or VERITAS), has to be considered.

For more details on the proposed tape drive equipment and its compatibility with the servers and
domains, please refer to the backup and restore section.
For more information on the procedures to perform backup and restore of MS-NPO servers, please
refer to document 9953 MP / MS-NPO Platform Administration Guide - 3BK 21315 AAAA PCZZA.

11.8.4 CENTRALIZED BACKUP AND RESTORE SOLUTION

The following table summarizes the scope of application of the centralized method:

                            NPO Essential Data (ORACLE LDAP)        NPO System Data
Server configuration        Legato    Other 3rd party               Legato    Other 3rd party
                                      (through Open MM -
                                      Media Manager)
NETRA 240 - 2CPU            YES       YES                           YES       YES
SE T5220                    YES       YES                           YES       YES
SFV490 2CPU                 YES       YES                           YES       YES
SFV490 4CPU                 YES       YES                           YES       YES
SE M4000 2CPU               YES       YES                           YES       YES
SE M4000 4CPU               YES       YES                           YES       YES
Cluster (n*SFV490 4CPU)     YES       YES                           -         -

Table 61: Centralized Backup & Restore Scope of usage



Engineering Note: Restrictions

The centralized B&R system for an NPO in cluster configuration does not cover the NPO system
data.

Engineering Note: Interface and bandwidth

For a centralized B&R system, a dedicated interface is required for the backup and restore of data
from the centralized solution. This is recommended for bandwidth considerations and can be done
through the dedicated OAM-BR interface (refer to the 9953 MP / NPO Platform Administration Guide -
3BK 21315 AAAA PCZZA for the configuration of a dedicated Gb/s network interface for backup and
restore flows).
In a cluster configuration, the backup and restore operations (towards the external B&R infrastructure)
apply to the B&R interface of the master node only.

Engineering Note: Centralized backup and restore using the Media Manager (MM) to support
integration with other centralized backup solutions such as HP Data Protector
The MM must be configured, and the required expertise and support relative to the 3rd party must be
available. The catalogue describing the complete list of system data to be backed up, and the
integration points with the MM, are described in 9953 MP / NPO Platform Administration Guide - 3BK
21315 AAAA PCZZA.

For more details on the procedures to perform backup and restore of NPO servers, please refer to
document 9953 MP / NPO Platform Administration Guide - 3BK 21315 AAAA PCZZA.

11.9. NPO CLIENT REQUIREMENTS


The NPO Client is a Java-based tool and runs in a Windows-based environment. The client hardware requirements for NPO are described in section 6.3.2.2.

11.10. ROUTING SWITCH AND BANDWIDTH REQUIREMENTS


The NPO is a "Network" server. It communicates with the WMS servers (data sources), the NPO Clients, and the Backup and Restore infrastructure.
The network segmentation architecture is defined in chapter Network Architecture.
Engineering Recommendation: Routing Switch and bandwidth considerations within the LAN
The OAM servers, including NPO, must be located within the same Ethernet LAN operating at Gigabit Ethernet speed, and the 1000 Mbps capability should be extended to all the routing switches of the LAN.
The minimum throughput requirement per NPO Client is 1 Mbps. For networks constrained in terms of bandwidth capabilities, a SOC (Server of Clients) solution should be considered.
To comply with the throughput requirement, every client should be connected to an Ethernet switch through a 100/1000 Mbps connection.


For MS-NPO, the server can be deployed in a different data centre from the WMS server. In such conditions, it communicates through ATM/WAN interfaces to remote OAM networks.
The minimum throughput requirements with regard to data transfer between peers through a WAN with IP routers must be calculated according to the volume of data and the GPO (General Permanent Observation) period.
Please refer to annex section 16.1 to determine the minimum throughput requirements for the transfer of observation files through a WAN.

Engineering Note: Interface and bandwidth with a centralized Backup and Restore infrastructure
For large configurations using a centralized Backup and Restore infrastructure, it is highly recommended to use a dedicated Gigabit Ethernet link on the second interface (OAM-BR). (Refer to the 9953 MP / MS-NPO Platform Administration Guide - 3BK 21315 AAAA PCZZA for the configuration of a dedicated Gb/s network interface for backup and restore flows.)

11.11. EXTERNAL INTERFACE CONSIDERATION


The NPO (or MS-NPO) contains an SNMP agent that can send traps to an external OSS running an SNMP manager. The features that can generate SNMP traps are the QOS Alerter and PMON (the process monitoring solution of NPO/MS-NPO).
The default destination port (configurable) on the OSS is 162.
For the integration of NPO (or MS-NPO) with an SNMP manager, including details on trap content and MIB structure, please refer to UMT/DCL/DD/025298 MUSE M3 EIS - Averter SNMP Interface.
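To check that traps actually reach the OSS host before the SNMP manager is integrated, a plain UDP listener on the trap port is often enough. The sketch below is an illustration only: it merely proves datagram delivery; decoding the BER-encoded PDU against the MIB is the SNMP manager's job, and binding to port 162 normally requires root privileges.

```python
# Minimal sketch of a receiver bound to the default trap port (162) to
# verify that QOS Alerter / PMON traps reach the OSS host.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 162))        # default destination port, configurable on the NPO side
print("waiting for SNMP traps on UDP/162 ...")
while True:
    data, (src, _) = sock.recvfrom(65535)
    print(f"trap datagram from {src}: {len(data)} bytes (BER-encoded PDU)")
```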

11.12. NPO PERFORMANCE CONSIDERATIONS


This section gives general information regarding NPO performance mainly in the form of key
performance indicators on the data management aspects. (Timing responses aspects are not treated
in this document).
All of the key performance indicators (KPIs) correspond to a fully loaded SFV490-4CPU server (as per
the capacity tables in this chapter) importing data at the smallest granularity supported by the NE and
with the maximum number of open clients.
The test conditions also follow the NPO Bandwidth requirements described in previous section with an
OAM LAN characteristic at 1 Gbps and the NPO PC Client hardware recommendations in section
6.3.2.2.


KPI: Normal mode data importation of one granularity period of performance data.
Target: < 20 minutes (average); maximum: 25 minutes.

KPI: Availability of imported data in reports. This is measured from the time the performance data file (XML file) is available on the WMS server disk to the time at which the data can be displayed in a report/view.
Target: < 15 minutes (average).

KPI: Overnight activity: full day re-import. Every night NPO re-examines all the performance data from the previous day and imports any missing counters. This KPI is defined assuming less than 25% of data from the previous day was missing.
Target: < 4 hours.

KPI: Catch-up mode: after an outage, the rate at which NPO for WMS can recuperate.
Target: less than 1:1. NPO takes less than 1 period (e.g. one hour for one hour of outage, one day for one day of outage, etc.) to recuperate.

Table 62 : NPO key performance indicators


12. MS PORTAL

The dimensioning model and capacity figures provided in this section give the hardware models on which the Multi Standard product is qualified.
Engineering Note: Common rules for 9959 MS-NPO & 9359 NPO
9959 MS-NPO is the performance reporting application for a customer's mixed 2G/3G radio-access network. The 9359 NPO solution is proposed for customers willing to manage 3G networks only.
9959 MS-NPO and 9359 NPO rely on the same product architecture. The previous section gives more details of the solution architecture.
Except for rules specific to the multi-standard context, the engineering rules and considerations described in the previous section for 9359 NPO apply to 9959 MS-NPO.

12.1. OVERVIEW
MS-Portal is a Multi Standard OAM Portal able to manage networks of the same or different technologies (2G/3G). It is composed of the following optional software applications running on the platform (based on SUN servers):
- 9953 MS-SUP server, offering a common supervision window for the 2G and 3G alarms
- 9959 MS-NPO server, offering a common follow-up of the QoS (counters, indicators, reports) with check, diagnosis and tuning facilities.

Figure 32 : MS-PORTAL architecture (the MS Portal client accesses the MS-SUP and MS-NPO applications of the MS-Portal, which interface with the 2G OMC-R (A1353-RA) for the GSM network and the 3G OMC-R (9353 WMS) for the UMTS network)

The MS-Portal can be made up of only the MS-SUP or only the MS-NPO application.

Engineering Note: Co-residency of applications not supported
The MS-SUP and MS-NPO applications cannot reside on the same hardware platform.

12.2. DIMENSIONING MODEL

The dimensioning of MS-PORTAL to the appropriate hardware mainly depends on the network capacity in terms of the maximum number of reference cells.

Engineering Rule: Capacity model for MS-NPO in a multi-standard 2G/3G scenario
The total number of reference cells used to determine the right MS-NPO model is computed as follows:
[0.75 * nb of 2G cells + 1 * nb of 3G cells]
The maximum number of OMC servers is limited to 5 (up to 5 data sources can be configured within an MS-NPO). A 3G ROC may contain two OMCs, i.e. 2 data sources. Therefore, when considering a ROC configuration, the total capacity should be taken into account when choosing the right MS-NPO model.

12.2.1 REFERENCE CELLS

The 2G and 3G cells have the same weight for MS-SUP.
But for MS-NPO, a 3G cell has a higher weight than a 2G cell, due to the higher number of counters available for 3G cells. We define the so-called reference cell for MS-NPO with the following formula:
- 2G cell = 0.75 reference cell
- 3G cell = 1 reference cell
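A minimal sketch of this dimensioning rule, using the reference-cell thresholds listed in Table 64 later in this chapter (the model names and thresholds come from that table; everything else is illustrative):

```python
# Sketch of the MS-NPO dimensioning rule: 2G cells weigh 0.75 reference
# cells, 3G cells weigh 1, and at most 5 data sources (OMCs) may be attached.
def reference_cells(nb_2g: int, nb_3g: int) -> float:
    return 0.75 * nb_2g + 1.0 * nb_3g

def pick_ms_npo_model(nb_2g: int, nb_3g: int, nb_data_sources: int) -> str:
    if nb_data_sources > 5:
        raise ValueError("at most 5 data sources per MS-NPO")
    ref = reference_cells(nb_2g, nb_3g)
    for model, max_ref in (("SE T5220 - 1CPU", 1400),     # thresholds from Table 64
                           ("SE M4000 - 2CPU", 9000),
                           ("SE M4000 - 4CPU", 18000)):
        if ref <= max_ref:
            return model
    raise ValueError(f"{ref:.0f} reference cells exceed a single MS-NPO")

print(pick_ms_npo_model(nb_2g=8000, nb_3g=3000, nb_data_sources=3))
# 0.75 * 8000 + 3000 = 9000 reference cells -> "SE M4000 - 2CPU"
```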

12.2.2 MAXIMUM NUMBER OF DATA SOURCES SUPPORTED

In addition to the capacity limitation in terms of maximum number of reference cells, the maximum number of OMC servers (or data sources) is limited to 5.


12.3. CAPACITY CONSIDERATIONS

The following tables provide the different MS-Portal configurations.
The complete description and characteristics of each server model are available in the hardware specifications section 6.

Hardware                                  Maximum number of users    Maximum number of cells (2G+3G)
SUN NETRA T5440 - 2CPU
(12 x 146 GB internal disk drives)        54                         27000
SUN ENTERPRISE T5220 - 1CPU
(8 x 146 GB internal disk drives)         32                         16000

Table 63 : MS-SUP Server Capacity

Hardware                            Max.     Max. 2G     Max. 3G     Max. 3G    Max.    Max.
                                    users    cells only  cells only  Node B     RNC     "reference cells"
SUN ENTERPRISE M4000 - 4CPU
(2 x 146 GB internal disk drives)   38       24000       18000       6000       60      18000
SUN ENTERPRISE M4000 - 2CPU
(2 x 146 GB internal disk drives)   27       12000       9000        3000       15      9000
SUN ENTERPRISE T5220 - 1CPU
(8 x 146 GB internal disk drives)   -        1860        1400        500        -       1400

Note 1: Applicable in case of a configuration with 3G cells only or multi-standard 2G+3G cells.
Note 2: Up to 3000 Node B per data source can be supported by MS-NPO.

Table 64 : MS-NPO Server Capacity

12.4. BANDWIDTH AND CONNECTIVITY REQUIREMENTS

In the Multi Standard context, the MS-Portal server is connected to several data centres that can be far from each other, with different transport characteristics.

Engineering Recommendation: Optimizing bandwidth in a multi-ROC context
The principal recommendation is to deploy the MS-PORTAL server within the data centre that has the largest number of reference cells. The MS Portal should be co-located, in the same LAN, with the largest OMC server in order to optimize the bandwidth and connectivity usage.

Engineering Recommendation: Routing Switch and bandwidth considerations through the WAN
The MS-NPO server can be deployed in a different data centre from the WMS server. In such conditions, it communicates through ATM/WAN interfaces to remote OAM networks.
The minimum throughput requirements with regard to data transfer between peers through a WAN with IP routers must be calculated according to the volume of data and the GPO (General Permanent Observation) period.
Please refer to annex section 16.1 to determine the minimum throughput requirements for the transfer of observation files through a WAN.

12.5. HMI SERVER CONFIGURATION

MS-Portal supports client access via an HMI (Human Machine Interface) on a Windows server running Citrix software to extend its display to any workstation (referred to as a Citrix ICA Client). This lowers the bandwidth used over the client network and reduces the need to purchase multiple hardware clients with the nominal configuration (i.e. customers can continue to use their existing workstations, even at a lower specification than nominal, as long as they can support the Citrix client).
At the Citrix client level a variety of OS systems can be supported (see the Citrix documentation for compatibilities). No other MS-Portal software needs to be installed on the Citrix client (besides the Citrix client software), and the Citrix client can therefore be used for other purposes as well, without complex co-residency issues.
The MS-Portal Client is installed on the HMI along with the Citrix software. A thin Citrix Client is installed on all client workstations.
When launching the MS-Portal Client from the Citrix clients, the ICA protocol carries only the display updates, keystrokes and mouse clicks to the Citrix Client, while the MS-Portal Client applications run fully on the HMI.

The following table provides the recommended hardware (with an example of hardware type) and the number of simultaneous MS-Portal users that can be supported on the given hardware. In order to support more users than listed in the table, multiple HMI servers need to be deployed in a Citrix farm.

Type                   HMI Server
Server name            HP ProLiant DL320 G6
Operating system       Windows 2003 Server Enterprise Edition SP2
Applications           Citrix Presentation Server 4.5 Enterprise Edition, Microsoft Office
CPU                    1 x 2 GHz (quad core)
RAM                    12 GB
Ethernet ports         2 x 1 Gigabit/s Ethernet interface
Disks                  1 disk of 160 GB
Max. number of users   10 users

Table 65 : MS-PORTAL HMI Server Hardware Configuration and Capacity


Engineering Note: HMI Server
Engineering Note HMI server for MS-Portal is optional.
Engineering Note - Windows Server 2003 is no longer orderable from Microsoft. It will be required to
obtain a Windows 2008 Enterprise Edition license key and either downgrade to Windows Server 2003
Enterprise Edition or contact Microsoft license support directly on the following numbers in the link:
http://www.microsoft.com/licensing/resources/vol/numbers.mspx

Engineering Rule: Interface and Bandwidth

- It is recommended to have a minimum of 1 Mbps of bandwidth per user between the HMI and the MS-Portal servers.
- It is recommended to have a minimum of 256 kbps of bandwidth per user between the HMI Citrix server and the Citrix client.
- If the HMI client network must be separated from the MS-Portal server network, two Ethernet interfaces can be used.

A sizing sketch based on these rules follows.
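The sketch applies the per-user bandwidth rules above together with the 10-users-per-server capacity from Table 65; the 25-user figure is just an illustrative input.

```python
# Back-of-envelope link sizing for an HMI deployment.
import math

def hmi_sizing(nb_users: int) -> dict:
    return {
        "hmi_servers": math.ceil(nb_users / 10),      # 10 users per HMI server (Table 65)
        "hmi_to_portal_mbps": nb_users * 1.0,         # 1 Mbps per user
        "citrix_to_clients_kbps": nb_users * 256,     # 256 kbps per user
    }

print(hmi_sizing(25))
# {'hmi_servers': 3, 'hmi_to_portal_mbps': 25.0, 'citrix_to_clients_kbps': 6400}
```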

For more details on the procedure to install and set up an HMI server, please refer to document Install 9753 OMC, 9953 MS-SUP,NPO HMI Server and Client Using Citrix 4.5 - 3BK 17436 4022 RJZZA


12.6. OTHER CONSIDERATIONS

The other engineering details given for 9359 NPO are applicable to the 9959 MS-NPO solution. Please refer to the previous section 11, 9359 NPO Network Performance Optimizer.


13. W-CDMA QUALITY ANALYSER (WQA)

This section describes the WQA product for WMS. This product allows the post-processing of the UTRAN neighbouring tuning feature.

13.1. HIGH LEVEL OVERVIEW AND SOLUTION ARCHITECTURE

The WQA product is designed to graphically display the CTn information issued from UTRAN networks.
The UTRAN Neighbouring feature allows building proposals for optimizing the neighbouring declarations, based on a dedicated UTRAN information flow (CTn).

Figure 33 : WQA Architecture (CTn activation in the UTRAN produces XML files that are buffered on the WMS; WQA collects them through the ADI, computes the matrices, and correlates them with performance management and configuration management (CM XML) data)


The CTn data is collected from the UTRAN and mediated to XML files in the WMS. The CTn data is not displayed at WMS level but in the neighbouring tuning product (WQA).
WQA is deployed in a ROC configuration, but it is not hosted by the WMS servers and requires dedicated hardware. WQA is connected to the WMS through a LAN.
The WQA main functional components are:
- a data collection and transformation layer, responsible for collecting the XML trace files and populating the information into a database;
- a database based on Oracle;
- a 3rd-party reporting interface, which runs reports and delivers them to the clients over a web interface.


In terms of architecture, an instance of the WQA application is hosted on a Windows-based server. A single instance of WQA can support multiple users. WQA users access the WQA GUIs and reporting using separate client platforms (PCs).

13.2. WQA CLIENT SPECIFICATIONS

The supported client platform is a PC running a Windows operating system which supports Windows Internet Explorer 6.x. Optimal performance is obtained if there are sufficient resources on the client to support WQA and Excel (main requirement: 100 MB should be free before starting WQA and Excel).
With regard to the operating system, the WQA client supports Windows XP in OAM 6.0.

13.3. CONSIDERATION FOR BACKUP AND RESTORE

The WQA product makes daily and weekly backup files of its database and saves these on its local disk (D:). The administrator can choose to copy these to another media (see the sketch below). Users/administrators might also want to save the downloaded trace files (they are purged automatically when 85% of the DB is filled up). Options which can be considered are an external server, an external storage device, or the use of a local storage device such as a tape device, an RW-compatible CD or DVD drive, or additional disks (a separate drive could be dedicated to this on the WQA server).
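The copy-to-other-media step can be scripted. The sketch below is a hedged illustration only: the D:\wqa\backup dump directory, the *.dmp pattern and the destination share are hypothetical names, not WQA defaults; check the actual dump location on the WQA server before using anything like this.

```python
# Mirror the daily and weekly dumps that WQA writes to its local D: drive
# onto an external share (both paths are hypothetical).
import shutil
from pathlib import Path

SRC = Path(r"D:\wqa\backup")          # hypothetical dump directory
DST = Path(r"\\backup-host\wqa")      # hypothetical external share

for dump in SRC.glob("*.dmp"):        # hypothetical dump file pattern
    target = DST / dump.name
    if not target.exists():           # copy only dumps not yet archived
        shutil.copy2(dump, target)
        print(f"archived {dump.name}")
```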

Figure 34 : WQA Backup & Restore (a weekly full DB dump and daily incremental DB dumps are schedulable; the XML trace files should be archived by the system administrator; storage on tape is mandatory for the weekly dump and optional for the daily dumps and XML archives; the typical restore case is corruption of the DB)


There is no system backup procedure. The WQA will be reinstalled and the data restored in case of
crash of the system.

13.4. CAPACITY CONSIDERATIONS

- Up to 10 concurrent web clients are supported on the WQA.
- Up to 4000 cells can be monitored at once.


For CTn engineering considerations, please refer to the UTRAN Call Trace section of the WMS Main Server Engineering Considerations chapter.


14. RADIO FREQUENCY OPTIMISER (RFO)

14.1. OVERVIEW
The Alcatel-Lucent RFO is a standalone PC application that replaces the NIMS PrOptima Call Trace functionality in OAM06. It provides a tool for the examination of the RF links between the mobile equipment and the radio network.
RFO does this by post-processing call traces from the radio network, decoding and analysing the following data:
- call traces collected at the RNC through the WMS (XML): CTa, CTb, CTg, CTn and OTCell traces.
In addition, UMTS RFO is able to decode CTn traces.
Being standalone, the user can work without being connected to the WMS and the radio network.
To learn more about the RFO, please refer to the document Alcatel-Lucent UMTS RFO Product User Manual - NN-20500-181.

14.2. RFO SOLUTION PROCEDURE

The usual operator procedure to analyse a call trace is the following:
- Call trace data is generated by the WMS using the Call Trace Wizard. Call traces cannot be initiated by the RFO.
- Once the call trace data is generated, it is stored locally on the WMS Main Server.
- The user can manually transfer the call trace data, which consists of several files or directories, and import them onto the RFO PC's hard disk. Ensure that sufficient bandwidth (as mentioned in the hardware requirements) is available to allow a quick transfer of data from the WMS server to the RFO.
- Once the data is imported, the RFO processes these physical files and converts them to logical files. It simultaneously parses the physical files, decoding each supported message, and stores it in the SQL database which is part of the RFO software.
- Physical files imported from the WMS servers can then be deleted, as RFO uses the logical files and the data in its SQL database to analyse the traces.

14.3. HARDWARE REQUIREMENTS

RFO is a standalone application that is supported on Windows PC hardware. The recommended PC hardware specifications are as follows:
- dual-core Intel CPU at 1.86 GHz
- 2 GB RAM
- 20 GB of hard disk space (call trace files can be large depending on the usage of call trace over several days)
- 100/1000 Mbps Ethernet connectivity card
- Windows XP SP2 (the user should have administrator rights)


15. 5620 NM

This section describes the 5620 Network Management product suite, which manages the Alcatel-Lucent 7670 Routing Switch Platform (RSP) and 7670 Edge Service Extender (ESE) and replaces the Passport 7k and 15k in ALU's UMTS network.
The Alcatel-Lucent 5620 Network Manager (NM) is a reliable and scalable network management solution. It provides network operators with a full range of capabilities on multi-technology networks. Traffic and service parameters on frame relay, ATM, X.25, SONET/SDH and ISDN links and paths can be configured through a point-and-click graphical user interface. It allows multiple operators to access the same system simultaneously, and thus manage the network from different sites.
The different components of the 5620 that manage the 7670 switch are described below.

15.1. 5620 NM DATABASE NETWORKSTATION

A Database NetworkStation runs on a Sun server and contains a database of network objects. There are two types of Database NetworkStation in a network:
- Standalone Database NetworkStation
- Redundant Pair Database NetworkStation

The Standalone Database NetworkStation is, as its name implies, a standalone platform. With this type of management, there is no database redundancy.
The Redundant Pair Database NetworkStations maintain a synchronised database between the active and standby database platforms. The role of the active database is to manage and control the network and network management elements by maintaining an up-to-date database. The role of the standby database is to constantly access the active database and ensure that its own database is identical to that of the active database. As part of the redundancy features, the active and standby Database NetworkStations constantly check their network connectivity and visibility by verifying that the active database platform is communicating with more than 50% of the network at any given time. If this situation changes, and the standby communicates with more of the network than the active for a specified time period, then an activity switch occurs between the two stations.
Both types cannot operate at the same time in the same network domain.
The 5620 NM Release 8.0 runs on Solaris 10 and uses Informix as its database system.

15.2. 5620 OPERATOR SERVER NETWORKSTATION

An Operator Server NetworkStation (also referred to as a Delegate server) runs on a Sun workstation or server that has access to the database on the 5620 NM Database NetworkStation.
Operator Server NetworkStations can be installed on the same hardware as the Database NetworkStation, or on a separate Sun server located remotely, depending on the capacity of nodes to be managed.
Operator Server NetworkStations provide additional operator access points to the network by supporting up to a combination of 255 concurrent GUI sessions across Operator Positions and Operator Server NetworkStations.


15.3. CPSS ROUTER NETWORKSTATION

CPSS (Control Packet Switching System) Router NetworkStations route CPSS traffic in large, multi-domain networks or in networks configured for distributed statistics collection. In general, CPSS Router NetworkStations are used when the network has more than 24 CPSS links. CPSS Router NetworkStations off-load the task of routing CPSS traffic from the 5620 NM Database NetworkStations.

15.4. STATISTICS COLLECTOR NETWORKSTATION

As the name suggests, this NetworkStation collects statistics from the 7670 nodes. It can be installed in one of the following configurations:
- integrated (default)
- distributed

To collect statistics for up to 100 000 paths, install the Integrated Statistics Collector. In an integrated configuration, the Aggregator and Collector software is installed on a 5620 NM Database NetworkStation.
In a distributed configuration, the Collector/Aggregator and Collector NetworkStations run as separate products from the 5620 NM.

15.5. 5620 CMIP/CORBA SERVER

The Alcatel-Lucent 5620 Common Management Information Protocol (CMIP) module and 5620 Common Object Request Broker Architecture (CORBA) OSS Interface module extend the Alcatel-Lucent management capabilities to other network-service and business-level OSSs (Operations Support Systems).
The 5620 CMIP/CORBA supports a subset of the functionality of the 5620 NM. In general, the 5620 CMIP/CORBA modules are compatible with the same network management software packages and transmission equipment as the 5620 NM.
The 5620 CMIP/CORBA OSS Interface Gateway software is installed on a Sun workstation known as the Gateway Station. This OSS Interface Gateway connects the OSS to the Sun workstation that runs the 5620 NM. The OSS accesses the 5620 CMIP/CORBA MIB, which stores network objects that map from the database of the 5620 NM.
Engineering Note: 5620 CMIP/CORBA OSS Interface
The OSS Interface CMIP/CORBA Gateway station has to be standalone and cannot co-reside with any other 5620 NM NetworkStation.

15.6. OPERATOR POSITION

Operator Positions are clients that access the 5620 NM database through an Operator Server NetworkStation.
You can use a Sun workstation running Solaris, or a PC running Hummingbird's Exceed software, as an Operator Position. For effective bandwidth utilization and for remotely located Operator Positions, Hummingbird's Exceed on Demand (EoD) software (V6.0 preferred) or GraphOn's Go-Global for Unix/Linux software (V2.2.4 preferred) is recommended. Please refer to section "Bandwidth Requirements" for more information.
Engineering Note: Operator Position
Both the EoD and Go-Global products have a client application and a server application. Alcatel-Lucent recommends installation of the server application on the Operator Servers. The Operator Server platform engineering guidelines are unchanged with these products installed.

Figure 35 : NetworkStations in a 5620 network (the active and standby 5620 NM Database NetworkStations, CPSS Router NetworkStation, CMIP/CORBA server, Statistics Collector NetworkStation and Operator Server are interconnected over the IP network with the 7670 RSP/ESE nodes; Operator Positions connect through the Operator Server, which can be collocated on a standalone 5620 NM NetworkStation without redundancy)

15.7. HARDWARE REQUIREMENTS

The minimum platform requirement for an integrated 5620 NM configuration managing a small network of CPSS-managed nodes is a Sun server with:
- 2 x 1 GHz (or more) Sun SPARC CPUs
- 4 GB of RAM
- 80 GB hard disk
- 1000 Mbps Ethernet link
- CD/DVD drive
- Solaris 10

These requirements are appropriate for test/trial networks with up to two 7670 switches and 5 user sessions. Such a server can perform the following functions simultaneously:
- Standalone Database NetworkStation
- Operator Server NetworkStation


- CPSS Router NetworkStation
- Statistics Collector NetworkStation

It is mandatory to use a separate Sun server for the CPSS Router NetworkStation in a network that has more than 24 CPSS links terminating at the 5620 NM Database NetworkStation. Please refer to section "7670 Node Types" to optimize the number of CPSS links to the 5620 NetworkStations. In this case, the minimum platform requirements for a standalone CPSS Router NetworkStation are a Sun server with:
- 1 x 500 MHz (or more) Sun SPARC CPU
- 256 MB of RAM
- 18 GB hard disk
- 100 Mbps Ethernet link
- CD/DVD drive
- Solaris 10

The minimum platform requirement for a CMIP/CORBA server (which must run on a separate server) is a Sun server with:
- 2 x 1 GHz (or more) Sun SPARC CPUs
- 2 GB of RAM
- 20 GB hard disk
- 100 Mbps Ethernet link
- CD/DVD drive
- Solaris 10

For more information on required hardware to manage the 7670 in your network, please contact your
local Alcatel-Lucent representative.

15.8. BACKUP & RESTORE

On the 5620 NM, a backup can be performed while the 5620 NM database is in use, because the backup function does not lock the database.
There are two backup options:
- database
- logical log
A database backup backs up the entire database, allowing you to recover the database if there is a disk failure. You can use the db_backup script to back up a 5620 NM database to:
- a tape
- a disk
Engineering Note: 5620 Backup And Restore
Engineering Note 1 - If you disable the scp command (secure copy), you cannot perform uncompressed database transfers between the active and standby Database NetworkStations, and you may not be able to perform remote logins. To support redundancy, Alcatel-Lucent recommends that you use compressed database transfers if the scp command is disabled.
Engineering Note 2 - You can only perform a database backup on the active Database NetworkStation.


A logical log backup is an incremental backup, which includes only those changes that have been
made to the database since the last backup. You enable the logical log during installation of the 5620
NM.
When the 5620 NM database is lost or inconsistent, you can restore the database from your backup
directory on your disk or tape. You can use the db_restore script to perform a full system restore. The
script also asks you whether you want to salvage the logical log and perform a restore of all logical log
backups.
For the db_restore script to be successful, the hardware configuration must be the same as it was
when the database backup was performed.
Choosing to salvage the logical log makes the recovery process more complete. The salvage process
backs up the logical log files that were not backed up before the failure occurred. By recovering these
files, the recovery process recovers all changes that were made to the database, up to the point of
failure.
For the 5620 CMIP/CORBA server, it is recommended to back up the MIB to a backup directory so that, if the MIB becomes corrupted, the backup copy can be used to restore it. The CE_control mib-backup command saves a copy of the MIB in a backup file that can be recovered using the CE_control mib-restore command. The 5620 CORBA/CMIP database must be running to perform the backup or restore procedure.

15.9. NETWORK INTERFACE/IP ADDRESSING

The 5620 NM and 7670 RSP/ESE use an Alcatel-Lucent proprietary packet-switched network management protocol for communication called CPSS (Control Packet Switching System). CPSS messages carry:
- control information
- alarm information
- performance information
- configuration status information
- timing information
- routing messages

CPSS messages are delivered by means of address indicators (CPSS addresses). Each element in a network (except the CMIP/CORBA server and Operator Positions) must be assigned a unique address to enable this identification process. The CPSS address is made up of:
- Domain Number - identifies the top level of network messaging subdivision, or domain. Domain numbers can be from 1 to 60. Each domain number must be unique within the network.
- Major Node Number - identifies the node (e.g. 3600, 8230, etc.) that is part of the specified domain. Node numbers can be from 1 to 1023. Each major node number must be unique within the domain.
- Minor Node Number - identifies the individual card types (e.g. control, FRE, FRS, DCP, etc.) resident on a node that have addressable capabilities. These cards have individual functions and operate as a node within a node.

Engineering Note: Minor Node Number

Minor node numbers do not have to be configured by the user; they are either fixed or randomly generated. The minor node number is excluded in the context of this document.

A CPSS address is defined in the format <domain_number>.<major_node_number>.
The following are the recommended CPSS node numbers to use for the different 5620 NM equipment:

Node Number                          Server Type
1023                                 Standalone and active Database NetworkStation (mandatory)
1021                                 Standby Database NetworkStation (mandatory)
769 to 1020 (except 1000, 1001)      Collector, Aggregator, Operator Server and CPSS Router NetworkStation

Table 66 : CPSS Addressing


Engineering Note: 5620 IP Addressing
Engineering Note 1 - Do not use node number 1022 for any 5620 NetworkStation.
Engineering Note 2 - The CMIP/CORBA server does not require a CPSS address.
Engineering Note 3 - Operator Positions do not require a CPSS address.
Engineering Note 4 - For redundant CPSS Router NetworkStation configurations, assign the active CPSS Router NetworkStation a lower CPSS address than the standby CPSS Router NetworkStation.
A CPSS network is a logical network that overlays the physical network. This is done over an Ethernet network using TCP/IP between the different network nodes.
The number of IP addresses required for the 5620 NM depends on the number of servers deployed in the network. Each 5620 server requires 1 IP address.
Engineering Note 5 - The 5620 does not support IPMP functionality.
Engineering Note 6 - The 5620 does not support multiple interfaces for OAM, NE and B&R.

15.9.1 CONVERTING A CPSS ADDRESS TO AN IP ADDRESS

The CPSS address given to a particular node can be converted to an IP address using two methods:
- using the /opt/netmgt/install/ipcvt tool
- manual calculation

The following is the manual method to perform this calculation.
The format of an IP address is A.B.C.D; the format of a CPSS address is X.Y (excluding the minor node number). In addition, an extra variable Z is used:
A = the Class A IP address. By default the 5620 uses the value "10". The value of the Class A IP address can also be determined in the /opt/netmgt/bin/NL_link.conf file of the 5620 NM workstation.
B = (X * 4) + Z
C = (Y - (Z * 64)) * 4 + 3
D = a constant which always equals 253
X = domain number
Y = major node number
Z = INT (Y / 64)
UMT/OAM/APP/024291

01.09/ EN Standard

2009-2010 Alcatel-Lucent

Alcatel-Lucent

147

Here is an example using the CPSS address 20.972.

First, determine the value of Z:
Y = 972 (i.e. the major node number of the CPSS address)
Z = INT ( Y / 64 ) = INT ( 972 / 64 ) = INT ( 15.19 ) = 15

Engineering Note: Address conversion

The INT function converts a number to an integer value by removing any decimal places.

Using the value for Z, calculate the value of the variable B:
X = 20 (i.e. the domain number of the CPSS address)
B = ( X * 4 ) + Z = ( 20 * 4 ) + 15 = 95

Next, calculate the value of C:
C = ( Y - ( Z * 64 )) * 4 + 3 = ( 972 - ( 15 * 64 )) * 4 + 3 = 51

All variables of the IP address are now calculated: A = 10, B = 95, C = 51, D = 253.
So the node's IP address is 10.95.51.253.
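The same calculation is easy to script when many nodes have to be addressed. Below is a minimal sketch of the manual method above (the supported tool remains /opt/netmgt/install/ipcvt):

```python
# Sketch of the manual CPSS-to-IP conversion described above.
def cpss_to_ip(domain: int, major_node: int, class_a: int = 10) -> str:
    z = major_node // 64                  # Z = INT(Y / 64)
    b = domain * 4 + z                    # B = (X * 4) + Z
    c = (major_node - z * 64) * 4 + 3     # C = (Y - (Z * 64)) * 4 + 3
    return f"{class_a}.{b}.{c}.253"       # D is the constant 253

assert cpss_to_ip(20, 972) == "10.95.51.253"   # worked example from the text
print(cpss_to_ip(20, 972))
```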

15.10. BANDWIDTH REQUIREMENTS

For bandwidth between an Operator Position and the Operator Server using a Sun workstation running Solaris, or a PC running Hummingbird's Exceed software, the network should be engineered for a minimum of 1 Mbps per user and a latency of 30 ms.
For bandwidth between an Operator Position and the Operator Server using Hummingbird's Exceed on Demand (EoD) or GraphOn's Go-Global for Unix/Linux software, the network should be engineered for a minimum of 128 kbps per user and a latency of 200 ms.
For bandwidth requirements between the different 5620 NM NetworkStations (i.e. Database, Operator Server, CPSS Router, CMIP/CORBA, Statistics Collector, etc.), it is recommended to have all of them on the same LAN with a speed of 1000 Mbps.
Bandwidth requirements between the 5620 NM NetworkStation and the 7670 nodes are <to be determined>.


15.11. 7670 NODE TYPES

In order to reduce the number of CPSS links to the 5620 NM, and to increase the efficiency of CPSS routing by decreasing the size of the routing tables in the 7670 nodes, there is a concept of nodal hierarchy. There are four types of CPSS nodes:
- Stub node - a 7670 that terminates or originates CPSS traffic. Stub nodes do not route CPSS traffic.
- Routing node - a 7670 that can route CPSS traffic. Such nodes connect to more than one node.
- Gateway node - a routing node that handles CPSS communications between the Network Management System (5620) and a CPSS domain. Every domain in the network has at least one routing node that is designated as a gateway. A gateway node has a direct CPSS link to the NMS and routes CPSS packets to the rest of the domain. Gateway links from the gateway nodes must terminate on either a Database or a Router NetworkStation.
- Leaf node - a 7670 that can route CPSS traffic only to the single node to which it is linked. The leaf node derives its CPSS address from the node to which it is linked.

For more information, please refer to the 5620 NM User Guide.

15.12. 7670 INTEGRATION TO WMS

In OAM06, a new feature introduces fault management support of the 7670 ESE/RSP directly by the WMS, and reach-through from the WMS server to the 5620 NM for all other management functions (configuration, performance).
Fault management is done using an SNMP interface between the 7670 ESE/RSP and the WMS. All fault management capabilities of the WMS apply to the alarms generated by the 7670, including the 3GPP OSS interface to receive these alarms.
Configuration and performance management is covered by the 5620 NM. The 5620 NM GUI can be launched in context from the WMS client GUI.
The figure below provides a diagrammatic view of the support of the 7670 RSP/ESE through the WMS.
In terms of capacity, the number of 7670 RSP/ESE supported is equal to the number of RNCs supported by the particular WMS server hardware type, assuming one 7670 switch is required per RNC.


Figure 36 : 7670 Network Management from WMS (the WMS performs CM/FM/PM for the Node Bs and RNCs, fault management directly for the 7670 RSP/ESE, and reach-through to the 5620 for the other 7670 management functions)


16. ANNEXES

In this section, volumetric information is provided with regard to the sizes of the different data per application. This information is an average value, with some examples of configurations.
It is intended for customer & services teams in case of a volumetric exercise (e.g. data space reservation within a backup & restore infrastructure, manual file management within a given repository or media, etc.).
In this section, the minimum throughput requirements concern data transfer between applications through a WAN with IP routers, except for remote connections to Network Elements.
Within the Ethernet LAN of the ROC, every client should be connected to an Ethernet switch through a 100/1000 Mbps connection, and every Ethernet port of every server must be connected to an Ethernet switch through a 1000 Mbps connection. This guarantees the best KPIs for the data transfer within the LAN.
In the context of a remote connection through a WAN, the throughput information below has to be considered as a minimum to comply with the WMS KPIs. It is intended for customer & services teams for the configuration of the transmission nodes through the WAN, to guarantee that the traffic rate operates at the specified level.
For communication with Network Elements, the Routing Switch which provides aggregation of the ATM/WAN interfaces to remote Network Elements must comply with the generic minimum throughput requirements provided in section 8.2.

16.1. OBSERVATION FILES

16.1.1 VOLUMETRIC INFORMATION
The conditions of measurement are:
- observation files in ZIP format,
- RNC data considering both files produced on the C Node and the I Node,
- BTS configured with 3 cells,
- and a default counter list activated on the RNC.

Configuration                                                    RNC file          BTS file
RNC with CP4, configured with 10 939x BTS (OneBTS), UA6          150 kilobytes     4 kilobytes
RNC with CP3, configured with 100 BTS (dNodeB2U), UA6            185 kilobytes     6 kilobytes
RNC with 2*CP4 (8 DCPS / 2 MS3), configured with 450 BTS
(dNodeB2U), UA6                                                  680 kilobytes     6 kilobytes

16.1.2 MINIMUM THROUGHPUT REQUIREMENTS

The scenarios for the transfer of observation files through a WAN mainly apply to a remote PM OSS, or to the usage of MS-NPO in a multi-technology environment for a remote ROC (e.g. 2G/3G).

[Diagram: the WMS in its data centre (ROC WMS) connects through a WAN (private/public IP backbone) to the MS PORTAL / NPO in another data centre (NOC or ROC OMC 2G) and to the OSS PM; the minimum throughput requirement applies on each WAN leg.]

The maximum deadline for file availability in NPO, including loading, must in general be under 1/3 of the configured GPO (General Permanent Observation) period.
This 1/3 GPO is an absolute period within which file transfer occurs continuously, including the regular polling activity, file parsing, and the loading of data into the NPO Oracle database. With regard to the pure file transfer activity, the duration usually takes 10% of the 1/3 GPO.
To guarantee the NPO performance with regard to basic recovery scenarios (e.g. missing data, or a loss of connection that implies managing more data within the same GPO), the quantity of data to be managed by NPO has to be doubled accordingly.
As a consequence, the general minimum throughput requirement for a nominal NPO usage is defined as follows:


General minimum throughput requirement (in kbps):

Minimum throughput = [ 2 x 8 x ( SUMi (Srnc i x Nbrnc i) + SUMi (Sbts i x Nbbts i) ) ] / ( 10% x GPO / 3 )

where:
Srnc i = size of the RNC i observation file (in kilobytes) under a given configuration (e.g. 185 kilobytes for each RNC with CP3 configured with 100 dNodeB2U BTS)
Nbrnc i = number of RNCs of type i
Sbts i = size of the BTS i observation file under a given configuration (e.g. 6 kilobytes for each dNodeB2U BTS configured with about 3 cells)
Nbbts i = number of BTS of type i
GPO = the minimum General Permanent Observation period (in seconds) configured on the BTS network elements (e.g. 900 seconds).

Example for 30 RNCs (with CP3 boards) and 3000 BTS (15-minute GPO activated), considering that each RNC is configured on average with 100 dNodeB2U BTS (185 kilobytes per RNC file): the general minimum throughput requirement through a remote channel becomes 19 Mbps.
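A small calculator makes the computation concrete. One assumption to flag: doubling the data volume (margin of 2, as the text above describes) yields about 12.6 Mbps for the worked example, while the quoted 19 Mbps corresponds to a margin of about 3; the margin is therefore left as a parameter to be validated against the deployment.

```python
# Sketch of the minimum-throughput calculation for observation files.
def min_throughput_kbps(rnc_files, bts_files, gpo_s, margin=2.0):
    """rnc_files / bts_files: lists of (file_size_kB, count) per NE type."""
    total_kB = (sum(s * n for s, n in rnc_files)
                + sum(s * n for s, n in bts_files))
    window_s = 0.10 * (gpo_s / 3)      # pure transfer window: 10% of 1/3 GPO
    return margin * total_kB * 8 / window_s

# 30 CP3 RNCs (185 kB each), each with 100 dNodeB2U BTS (6 kB), 15-min GPO:
kbps = min_throughput_kbps([(185, 30)], [(6, 3000)], gpo_s=900)
print(f"{kbps / 1000:.1f} Mbps")       # ~12.6 Mbps with margin=2, ~18.8 with margin=3
```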
Engineering note: Defining minimum throughput requirements
The size of the Network Element observation files, and the corresponding average values, are specific to a customer network configuration (RNC card configuration, number of cells per BTS, number of counters activated per RNC, etc.).
They therefore have to be measured before applying the formula above, in order to determine the minimum throughput requirements under the best conditions.

16.2. NE SOFTWARE
The scenario for the transfer of NE software files through a WAN applies to remote software repository management.

16.2.1 VOLUMETRIC INFORMATION

Below are the assumptions for the size of the NE software (in megabytes, UA6):

NE                                   Software size
RNC                                  1000
BTS (iBTS (19))                      50
BTS (OneBTS), BTS (Pico/Micro)       40

16.2.2 MINIMUM THROUGHPUT REQUIREMENTS

There is no specific recommendation, except that the connectivity with the remote SRS (Software Repository Site) should provide acceptable KPIs, in order to meet the minimal operational need of reducing the upgrade maintenance window.

(19): iBTS can be iNode-B, Macro Node-B, distributed Node-B, digital 2U Node-B, digital Compact Node-B, RRH (Remote Radio Head) or Radio Compact Node-B.


17. ABBREVIATIONS

ADI - Access Data Interface
API - Application Program Interface
ATM - Asynchronous Transfer Mode
ASCII - American Standard Code for Information Interchange
BB - Building Block
CD - Compact Disk
CMIP - Common Management Information Protocol
CORBA - Common Object Request Broker Architecture
CPSS - Control Packet Switching System
CPU - Central Processing Unit
DCN - Data Communication Network
DHCP - Dynamic Host Configuration Protocol
DNS - Domain Name Server
IPMP - Internet Protocol Multi Pathing
IPSec - Internet Protocol Security
LAN - Local Area Network
MDM - Multi-service Data Manager
MIB - Management Information Base
NE - Network Element
NEBS - Network Equipment-Building System
NOC - National Operation Centre
NM - Network Manager
NMS - Network Management System
NPO - Network Performance Optimizer
OAM - Operations Administration and Maintenance
OSS - Operations Support Systems
RADIUS - Remote Authentication Dial-In User Service
RAMSES - Remote Advanced Monitoring System for Engineering Support
RFO - Radio Frequency Optimizer
RNC - Radio Network Controller
ROC - Regional Operation Centre
UMTS - Universal Mobile Telecommunication System
UTRAN - UMTS Terrestrial Radio Access Network
VPN - Virtual Private Network
WAN - Wide Area Network
WMS - Wireless Management System
WQA - W-CDMA Quality Analyzer
