
GSM BSC 3000 and TCU 3000

Description - Technical
1075A

Student Guide
Guide release: 16.03
Guide status: Standard
Date: November, 2006

411-1075A-001.1603

FOR TRAINING PURPOSES ONLY


Copyright © 2006 Nortel Networks. All rights reserved.

The information contained in this document is the property of Nortel Networks. Except as specifically
authorized in writing by Nortel Networks, the holder of this document shall not copy or otherwise
reproduce, or modify, in whole or in part, this document or the information contained herein. The
holder of this document shall protect the information contained herein from disclosure and
dissemination to third parties and use the information solely for the training of authorized individuals.

THE INFORMATION PROVIDED HEREIN IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
KIND. NORTEL NETWORKS DISCLAIMS ALL WARRANTIES, EITHER EXPRESSED OR
IMPLIED, INCLUDING THE WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE. IN NO EVENT SHALL NORTEL NETWORKS BE LIABLE FOR ANY
DAMAGES WHATSOEVER, INCLUDING DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL,
LOSS OF BUSINESS PROFITS OR SPECIAL DAMAGES, ARISING OUT OF YOUR USE OR
RELIANCE ON THIS MATERIAL, EVEN IF NORTEL NETWORKS HAVE BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.

Information subject to change without notice.


Nortel, Nortel Networks, the Globemark device, and the Nortel Networks logo are trademarks of
Nortel Networks.

Visit us at: nortel.com/training



Course introduction

Overview
Description
This course provides a comprehensive technical description of the BSC 3000
and TCU 3000 products. It applies to the V16 release of the BSS.

Intended audience
This course is designed for people who need to know the functions
and architecture of the BSC 3000 and TCU 3000.

Prerequisites
This course has the following prerequisites:
• 1061A: GSM GPRS System Overview - Technical

Objectives
After completing this course, you will be able to:
• Describe the physical and functional architecture of the BSC 3000
and TCU 3000
• Describe module functions and interfaces
• Trace the signaling and traffic paths inside and outside the
equipment



References
The following documents provide additional information:

NTP 411-9001-126 BSC / TCU 3000 Reference Manual



Contents

1. Introduction

2. BSC 3000 and TCU 3000 Presentation

3. BSC 3000 and TCU 3000 Architecture

4. Data Flow Exercises

5. BSC 3000 and TCU 3000 Operation

6. BSC 3000 and TCU 3000 Maintenance and Enhanced Exploitability

7. BSC 3000 and TCU 3000 Provisioning

8. Exercise Solutions



Publication History
Version    Date            Comments
15.01/EN   July 2005       New reference name (formerly PR4); compliant with V15.1 BSS Release
15.02/EN   November 2005   Compliant with V15.1.1 (Preliminary) BSS Release
15.03/EN   May 2006        Compliant with V15.1.1 (Standard) BSS Release
16.01/EN   June 2006       Compliant with V16 (Preliminary) BSS Release
16.02/EN   October 2006    Compliant with V16 (Standard) BSS Release
16.03/EN   November 2006   New template



Section 1

Introduction

About Knowledge Services

> Knowledge Services offers three programs to help you get the most
out of your Nortel solutions.
• Training with a focus on eLearning
• Certification
• Documentation
> Making the global transition to “e”
• We are transitioning many of our programs so we can meet the
demands of the 21st century, including a new focus on eLearning,
an industry-leading certification program, new opportunities to
save, vehicles for electronic communication to keep you in the
know, and more.


Knowledge Services programs help you speed your time to proficiency.
Through our programs, you can:
• Save time and money on quality, comprehensive training with our new
eLearning portfolio
• Build the foundation for skills needed to successfully achieve certification
through our training programs
• Gain hands-on experience with Nortel Networks solutions through our
advanced lab courses
• Demonstrate and validate your knowledge and hands-on skills by achieving
certification through our industry-leading certification program

Nortel Homepage


www.nortel.com

Training & Certification Page


www.nortel.com
• Select Training
• Select the appropriate product family …
• …Choose a product…
• …And get the content

Select the appropriate geographic region and language to customize
your view.

Points of Contact:
• CAMs (Customer Account Managers) – the customer can direct
questions/issues to their internal training prime, who can be in contact
with the Nortel CAM.
• CSRs (Customer Service Representatives) – at the regional call center number.
• Instructor – provides business cards/email address/phone number.

Training Page


Page that appears when “Training” is selected.

Depending on your selection, you see the training offer in your region (NA, EMEA,
ASIAPAC, CALA) or the global offer.

Curriculum Paths Page


Page that appears when “Curriculum Path” is selected.

You can select the appropriate training according to your job function.

Technical Documentation


www.nortel.com
Select Support & Training
Select Technical Documentation

GSM BSS Nortel Technical Publications

[Diagram: GSM BSS documentation roadmap. The Nortel Technical Publications
(NTP 411-9001-xxx) are mapped to the job function categories Concepts,
Upgrading, Configuring, Administration and Security, Operations, and Fault
and Performance Management.]

GSM BSS Nortel Technical Publications

This suite is sorted by job function category.

Course Objectives

> Describe the physical and functional architecture of the
BSC 3000 and TCU 3000
> Describe the module functions and interfaces
> Trace the signaling and traffic paths inside and outside the
equipment
> Explain how to operate and maintain BSC3000 and
TCU3000


Course Contents

> Introduction
> BSC 3000 and TCU 3000 Presentation
> BSC 3000 and TCU 3000 Architecture
> Data Flow Exercises
> BSC 3000 and TCU 3000 Operation
> BSC 3000 and TCU 3000 Maintenance and Enhanced
Exploitability
> BSC 3000 and TCU 3000 Provisioning
> Exercise Solutions


BSC 3000 and TCU 3000
Presentation

Section 2

Objectives

After this module of instruction, you will be able to:

> list the BSC 3000 functions
> list the TCU 3000 functions


Contents

> BSS in GSM Network


> BSC Functions
> TCU Functions
> BSC 3000 and TCU 3000
> BSC 3000 and TCU 3000 Hardware


BSS in GSM Network
[Diagram: the BSS within the GSM network. The BTS types (S8000 Outdoor,
e-Cell, S12000 Indoor, BTS 18010, BTS 18020 Combo) are linked to the BSC
over the Abis interface and to the MS over the radio interface. The TRAU
(TCU) sits on the Ater and A interfaces towards the MSC (NSS) and the
Public Switched Telephone Network. The OMC-R (with Sun StorEdge A5000) is
reached over the OMN interface, and the PCUSN over the Agprs interface,
with the Gb interface towards the GPRS core network and the Internet.]

The Base Station Subsystem includes the equipment and functions related to the
management of the connection on the radio path.
It mainly consists of one Base Station Controller (BSC), and several Base
Transceiver Stations (BTSs), linked by the Abis interface.
An optional piece of equipment, the Transcoder/Rate Adapter Unit (TRAU),
called the TransCoder Unit (TCU) in Nortel Networks BSS products, is
designed to reduce the number of PCM links.
These different units are linked together through specific BSS interfaces:
• Each BTS is linked to the BSC by an Abis interface.
• The TCUs are linked to the BSC by an Ater interface.
• The A interface links the BSC/TCU pair to the MSC.
• The Agprs interface links the BSC to the PCUSN.

BSC Functions
1 - Basic Functions
[Diagram: BSC basic functions. It shows terrestrial resources management
and routing between the BTSs and the MSC, traffic concentration, and
SMS-CB management (example broadcast: "CAUTION: CRASH ON E12 HIGHWAY").]

The basic functions of the BSC are the following:


• Terrestrial resource management:
— setup/release of terrestrial channels,
— channel switching between MSC and BTS.
• Radio resource management:
— setup/release of radio channels,
— radio channel monitoring.
• Traffic concentration on the Ater interface.
• Short Message Service - Cell Broadcast management:
— broadcast of short messages, defined at the OMC-R, towards target
cells.

BSC Functions
2 - OA&M Functions
[Diagram: BSC OA&M functions. It shows BTS and TCU management (startup,
shutdown, supervision, observation) and OMC-R interface management (data
and software exchanged over Ethernet).]

The main OA&M functions of the BSC are the following:


• BTS and TCU management:
— software downloading,
— initialization,
— supervision,
— configuration and reconfiguration,
— observations.
• OMC-R interface management, which consists of:
— managing links with the OMC-R,
— providing the services requested by the OMC-R,
— storing the BSS configuration data: software storage and distribution
among the various entities of the BSS.

TCU Functions

[Diagram: four 16 kbps channels (4 speech channels + signaling, or 4 data
channels) carried in one 64 kbit/s channel on the Abis and Ater
interfaces; the TCU, on the MSC premises, expands them to 4 x 64 kbit/s
towards the MSC on the A interface.]

The TCU converts the GSM speech frames into PSTN/ISDN A-Law or µ-Law
speech. It is also called "TRAU", for Transcoder and Rate Adapter Unit.

The concept of remote transcoders is used to convey four multiplexed channels at


16 kbps onto a single 64 kbit/s PCM channel.
Multiplexing is implemented within the BTS, thus reducing the number of
PCM links needed on the Abis interface.
The TCU enables code conversion of 16 kbit/s channels from the BSC into
64 kbit/s channels for the MSC in both directions.
TCU is the product designation of Nortel for the TRAU (Transcoder and Rate
Adapter Unit) specified in the GSM recommendations.

BSC 3000 and TCU 3000
1 - Physical Presentation

[Photographs: the BSC 3000 and TCU 3000 cabinets.]

The BSC 3000 and the TCU 3000 are one-cabinet equipment assemblies,
composed of two Nodes and one Service Area Interface.
Each Node is housed in a sub-rack comprising two shelves.
The cabinet is designed for indoor applications.
The design allows front access to the equipment.
External cabling from below or above is supported.
The Service Area Interface or SAI is installed on the left side of the cabinet:
• it provides front access to the PCM cabling,
• it contains electrical equipment used to interface the BSC or the TCU and the
customer cables.
The product is EMC compliant. No rack enclosure is required for this reason, as
EMC compliance is achieved at the sub rack level (Control Node, Interface Node
and Transcoding Node).

BSC 3000 and TCU 3000
2 - Physical Description
[Diagram: the BSC 3000 cabinet (Control Node above the Interface Node) and
the TCU 3000 cabinet (two Transcoding Nodes), each with power supplies,
fans, and a Service Area Interface for the PCM cabling.]

The BSC 3000 is a one-cabinet piece of equipment, composed of two Nodes
and one Service Area Interface.
The two BSC 3000 Nodes are the Control Node and the Interface Node.
In addition, the Control Node (in charge of Call Processing and OA&M) of the BSC
3000 implements a Fault Tolerant architecture, based on redundancy of processes
and a load balancing mechanism on the processors, allowing fast recovery of
service (within a few seconds) after a hardware failure.
The TCU 3000 is a one-cabinet piece of equipment, composed of up to two
Transcoding Nodes and one Service Area Interface.
The power supply for both the BSC 3000 and TCU 3000 is –48 V dc.
The maximum power consumption of the BSC 3000 or TCU 3000 is 2 kW.
Each Node (sub rack) is powered by one rack power distribution tray.
Each sub rack is cooled by four fans (replaceable). The fan rack is also referred to
as the Cooling Unit assembly.

BSC 3000 and TCU 3000
3 - Mixed System Architecture
[Diagram: mixed system architecture. One OMC-R manages a V14.3 BSC 2G
(over X.25) with its TCU 2Gs and V12.4/V14.3 BTSs, alongside V15 BSC 3000s
(over Ethernet) with TCU 3000, TCU 2G, PCUSN, and V15 BTSs.]

The V15 release introduces enhanced capacity on the BSC 3000 for EDGE
functionality.
The BSC 3000 and TCU 3000 are intended to interwork with current BSC 12000,
BTS and OMC-R products.
These products require a software upgrade to interwork with the BSC 3000
and TCU 3000.
The OMC-R/BSC 3000 link is TCP/IP over Ethernet, instead of native X.25 as
for the BSC 12000.
The OMC-R/BSC 3000 link over the A/Ater interface is not available in
V15.1.1.

BSC 3000 and TCU 3000 Hardware
1 - Cabinet Structure and Cooling
[Diagram: cabinet front and side views with dimensions in mm: 900, 600,
300, 600, and 2200.]

The frame dimensions (ETSI standard) are 60 x 60 x 220 centimeters.


The total dimensions of the BSC 3000 or TCU 3000 cabinet (frame + SAI) are as
follows:
W = 90 cm, D = 60 cm, H = 220 cm.
The maximum weight of the BSC 3000 or TCU 3000 equipment is 570 kg
(BSC 3000+TCU 3000 = 1100 kg). This yields a maximum floor load of 1000
kg/m2.
The BSC 3000 and TCU 3000, being totally front access equipment, can be
installed back to back or back to wall.
The work space required in front of the cabinet is 60 to 90 cm wide.
The cooling unit supports four fan units and an air filter, and is mounted
above an air plenum that directs cooling air to the fans.
For climatic and thermal conditions, the BSC 3000 and TCU 3000 are compliant
with:
• temperature: –5 °C to +45 °C,
• relative air humidity: 5% to 90%, in operating conditions.

BSC 3000 and TCU 3000 Hardware
2 - Generic Module

LED description                  Meaning
Round shape, yellow color        Read/write status
Triangular shape, red color      Module status
Rectangular shape, green color   Module status


The term “module” refers to a circuit pack enclosed by a metallic housing.


Packaging circuit packs in modules provides the following features and benefits:
• a single level of EMC shielding,
• radiated containment across boards within a shelf,
• defined volume control of environmental noise,
• ElectroStatic Discharge protection for circuit packs,
• handling ruggedness,
• minimizes EMC retesting for new designs,
• provides visual indicators (LED) on the front plate.
All BSC 3000 & TCU 3000 modules have the same LEDs on the upper part of the
front plate of each module to ease on-site maintenance and reduce the risk of
human error.

BSC 3000 and TCU 3000
Architecture

Section 3

Objectives

After this module of instruction, you will be able to:
> List the external interfaces and associated protocols of the
BSC 3000 and TCU 3000
> List the different modules of the Control Node, Interface
Node and Transcoding Node
> Describe the role of each module

Contents

> BSC/TCU 3000: External Links
> BSC 3000 and TCU 3000 Generic Architecture
> BSC 3000: Control Node
> Control Node: ATM Platform
> Control Node Architecture
> Operation and Maintenance Unit
> Mass Memory Storage
> Traffic Management Unit
> ATM Subsystem
> ATM Switch Module
> BSC 3000: Interface Node
> Interface Node
> ATM RM
> Switching Unit
> Low Speed Access Resource Complex
> TCU 3000: Transcoding Node
> Common Equipment Module
> Transcoder Resource Module
> Internal PCM S-Link Allocation
> Maintenance Trunk Module Bus
> Shelf Interface Module
> Service Area Interface
BSC/TCU 3000: External Links
[Diagram: BSC/TCU 3000 external links. The Abis interface towards the BTS
carries LAPD OML, RSL and GSL plus traffic; the Ater interface towards the
TCU and MSC carries LAPD OML, SS7, voice and data; the Agprs interface
towards the PCUSN carries LAPD OML, RSL and GSL; Ethernet links the BSC to
the OMC-R.]

Three types of signaling are transported over the Abis interface:


• LAPD/OML related to the Operation and Maintenance,
• LAPD/RSL for the Radio Signaling Link,
• LAPD/GSL for the GPRS Radio Signaling Link.
The BSC can be connected to the OMC-R through an Ethernet network or through
the A interface.
Two types of signaling are transported over the Ater interface:
• LAPD/OML for control of the TCU transcoders by the BSC,
• SS7 going to the MSC.
Three types of GPRS signaling are transported over the Agprs interface:
• LAPD/OML for control of the PCUSN by the BSC,
• LAPD/RSL for the Radio Signaling Link,
• LAPD/GSL for the GPRS Radio Signaling Link.

BSC 3000 and TCU 3000 Generic Architecture
[Diagram: generic architecture. The Control Node (OMUs with OAM, private
and shared MMS, ATM SW planes, TMUs for traffic management) is connected
to the Interface Node (ATM RM, CEM, 8K RM, LSA RC) and to the Transcoding
Node (CEM, LSA RC, TRM).]

The BSC 3000 is composed of the Control Node and the Interface Node.
The TCU 3000 is composed of the Transcoding Node.
Control Node main functionalities are:
• Management of OAM for the C-Node, I-Node and T-Node,
• Traffic management towards the BTSs and MSC,
• BTS supervision, Transcoding Node supervision,
• OMC-R link management,
• Failure detection and processing,
• HandOver procedures,
• BSS configuration and software management,
• BSS performance counter management,
• ATM Management.
Interface Node main functionalities are:
• I-Node OAM management,
• Switch management and Timeswitch control,
• PCM interface,
• ATM Management.
Transcoding Node main functionalities are:
• T-Node OAM management,
• Switch management and call processing,
• BSC Access,
• Carrier Maintenance.

BSC 3000: Control Node

[Diagram: Control Node sub-rack layout, slots 1 to 15 on two shelves.
Shelf 1: two ATM SW modules, a filler, two OMUs, seven TMUs, SIM B.
Shelf 0: two shared MMS, two private MMS, seven TMUs, fillers, SIM A.]

The Control Node is composed of the following modules:


• the Operation and Maintenance Unit (or OMU), which manages all BSC resources,
ensures BSC survival, BSS interface with the OMC-R and disk management,
• the Mass Memory Storage (or MMS), which holds all the data:
— private: managed by one OMU,
— shared: managed by both OMUs, for data that must be secured and still
accessible in the event of an OMU or disk failure,
• the ATM Switch (or ATM SW), which implements the ATM network used as the
Control Node backplane, and provides ATM on OC-3 connectivity towards the
Interface Node,
• the Traffic Management Unit (or TMU), which provides the processing capability
required to perform the GSM/GPRS processing and protocol termination required for
GSM interfaces,
• the Shelf Interface Module (or SIM), which provides the power (-48 V) and alarm
interfaces for the sub-rack.
The maximum configuration for the Control Node is the following:
• 2 OMU modules,
• 14 TMU modules,
• 4 MMS modules, (2 private and 2 shared),
• 2 ATM SW modules,
• 2 SIM modules.

Control Node: ATM Platform

Control Interface
OAM OMU Node Node
OAM OMU
Plane 1 ATM RM
ATM SW

ATM/PCM
Interface
CEM
ATM Links
(25 Mbit/s) ATM Links S-links
(155 Mbit/s) 64 kbit/s
TMU
TMU ATM SW Plane 2 ATM RM
TMU
ATM/PCM
TMU
Interface

Traffic
Management

411-1075A-001.1603
3-7 November, 2006
FOR TRAINING PURPOSES ONLY

The Control Node is a computing and signaling platform built around an ATM
switch.
Globally, the Control Node is designed as a fully redundant ATM switch for any
inside and outside communications.
Internal and external exchanges are carried over ATM: communication
between the Control and Interface Nodes uses a redundant optical OC3
connection at 155 Mbps.

The platform is also fully ATM inside; ATM connections are not terminated
at the access ports of the Control Node (the ATM switches), but on the
computing modules inside the shelf. The addressing to/from the Control
Node is based on VPI/VCI.
ATM RMs and ATM SW modules are provisioned in pairs to provide redundancy
and connection protection:
• both planes are used at the same time,
• all messages exchanged between ATM RMs and ATM SW modules are
duplicated.

Control Node Architecture
[Diagram: Control Node architecture. The active and passive OMUs reach the
private and shared MMS disks over SCSI buses; two ATM SW planes link the
OMUs and the four TMUs over 25 Mbit/s ATM links, with 155 Mbit/s ATM links
from each plane towards the Interface Node.]

The Control Node is composed of the following three functional modules:


• the Asynchronous Transfer Mode SWitch or ATM SW, which implements the
ATM network used as the Control Node backplane, and provides ATM on OC-
3 connectivity towards the Interface Node,
• the Operation and Maintenance Unit or OMU, which manages all BSC
resources, ensures BSC survival, BSS interface with the OMC-R and disk
management,
• the Traffic Management Unit or TMU, which provides the processing
capability required to perform the GSM processing and protocol termination
required for the GSM interfaces. One TMU handles 300 Erl (whatever the
subscriber profile).
The Mass Memory Storage or MMS is simply a hard disk.

Operation and Maintenance Unit
1 - Overview
[Diagram: the OMU between the TMUs (traffic management), the MMS disk, the
ATM SW (towards the Interface Node), the OMC-R, and the TML local
maintenance terminal.]

The Operation and Maintenance Unit module is responsible for the following
functions:
• management of all BSC resources (both Control and Interface Nodes),
• BSS interface with the OMC-R (Ethernet),
• OMC link management either by a physical serial link or constant bit rate data
sent to the ATM datalink,
• disk management,
• Local Maintenance Terminal (TML).
The OMU is provisioned in a 1+1 redundancy scheme.

Operation and Maintenance Unit
2 - OA&M Functions

BSC OMU
OA&M

Performance
Management
Configuration Fault
Management Management OMC-R

ATM SW ATM RM CEM Interface


Node
Control
Node 8K RM LSA RC PCUSN

TMU TMU
BTS BTS
CEM LSA RC
OA&M OA&M

PCUSN TCU PCUSN TCU TRM


OA&M OA&M OA&M OA&M
Traffic Traffic Transcoding
Management Management Node
411-1075A-001.1603
3-10 November, 2006
FOR TRAINING PURPOSES ONLY

The OA&M function is in charge of management and supervision of:


• internal BSC equipment,
• other BSS equipment: BTS, TCU, PCUSN.
The OA&M entities for BTS resources (for direct Abis access), as well as
TCU and PCU supervision and OA&M, are mapped on the TMU.
For each resource, the OA&M ensures classical functions such as:
• Configuration Management,
• Fault Management: detection, resolution, notification, correction,
• Performance Management: measurements,
• Upgrade Management,
• Test Management.
The OA&M is not a simple OMC-R agent on the BSC; it has its own decision
criteria to initiate actions following commands or observations:
• overload protection,
• switching of activity (swact) on module failure (fault tolerance),
• defense against applicative inconsistencies.

Operation and Maintenance Unit
3 - BSS Interface with OMC-R

BSC OMC

OAM OAM
RJ45

RS232
(Debug) Association Association
(proprietary) (proprietary)

RFC 1006 RFC 1006

TCP TCP

IP IP

Ethernet Ethernet

TCP/IP
Network

411-1075A-001.1603
3-11 November, 2006
FOR TRAINING PURPOSES ONLY

Though the same OMC-R manages both the BSC 2G and the BSC 3000, the
interface between the BSC 3000 and the OMC-R is Ethernet TCP/IP, instead of
X.25 as for the BSC 2G.
Two data paths are available for OMC-R access and/or other purposes:
• PCM: on one or more TSs (DS0) via the LSA RC module (available in V15),
• Ethernet: TCP/IP on Ethernet at 10/100 Mbps.
The direct Ethernet connection is provided by the RJ45 connector of the OMU
faceplate.
A switching device or four-port LAN hub, located in the SAI, is required.
A small sublayer based on IETF RFC 1006 allows dialog between the
Association (proprietary) and Application layers.
When the BSC 3000 is remote from the OMC-R, they can be interconnected
through a network (X.25, Frame relay, etc.) with a minimum throughput of
128 kbps.
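
As an illustration of the RFC 1006 sublayer mentioned above, the sketch
below frames a message with the standard 4-byte TPKT header that RFC 1006
defines for carrying OSI transport over TCP. It shows the framing only;
the proprietary Association layer above it is not modeled:

# Minimal sketch of RFC 1006 (TPKT) framing over TCP.
import struct

TPKT_VERSION = 3

def tpkt_wrap(payload: bytes) -> bytes:
    """Prefix a payload with the 4-byte TPKT header: version (1 byte),
    reserved (1 byte), total length (2 bytes, big-endian, header included)."""
    return struct.pack("!BBH", TPKT_VERSION, 0, len(payload) + 4) + payload

def tpkt_unwrap(packet: bytes) -> bytes:
    version, _reserved, length = struct.unpack("!BBH", packet[:4])
    assert version == TPKT_VERSION and length == len(packet)
    return packet[4:]

msg = b"association-request"   # stand-in for an OMC-R/BSC message
assert tpkt_unwrap(tpkt_wrap(msg)) == msg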

Mass Memory Storage
1 - SCSI Bus
[Diagram: the active and passive OMUs connected over SCSI buses to the
four MMS disks, two private and two shared.]

There are four Mass Memory Storage modules (hard disk) in the BSC.
They are linked to the OMU modules through four SCSI buses.
Two SCSI buses are dedicated to the two private disks storing:
• OS AIX (400 Mb),
• software for OMU boards.
Two of them are for mirrored shared disks storing:
• local MIB (BDA),
• observations, notifications,
• Call traces,
• Supervision,
• BTS and TCU software.
The pair of shared SCSI buses, and the disks on them are only managed by the
active OMU.
The shared SCSI buses will only be accessed after “election” of the active OMU.
When a switch of activity occurs (fault tolerance mechanism), the newly active
OMU gains control of the pair of shared SCSI buses.

Mass Memory Storage
2 - Disk Sub-system
[Diagram: SCSI buses 1 and 2 linking the active and passive OMUs to the
private and mirrored shared MMS disks. Transactions on the shared disks
follow whichever OMU is active, while the mirrored pair remains always
available.]

Each OMU module controls a private disk which holds all the private data (OS and
System data) for the module and a pair of shared disks (BSS database and GSM
data) managed in a mirroring way.
Each Mass Memory Storage module contains a 9-Gbyte SCSI-2 hard disk.
At boot time, each OMU module has access to its private SCSI and so to its
private disk.
The pair of shared disks holds the data that must be secured and still be
accessible in the event of an OMU failure or a disk failure.
The protection of the shared disks is independent of the protection of the
OMUs: the non-active OMU can be extracted from the system without any
impact on the disk transactions.
In the event of the extraction of the active OMU, a swact of the OMUs occurs, and
the disk subsystem is still protected from a single failure.

SWACT = SWitch of ACTivity. This refers to a sparing action where an inactive


board takes control over a faulty active board. In the Control Node, this applies to
the OMU and the TMU modules, in the Interface Node to the CEM and the LSA
RC.
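
A toy model of the shared-disk policy described above (illustrative
Python; the names and in-memory "disks" are obviously stand-ins for the
mirrored MMS hardware):

# Every write goes to both shared disks; a read is served by any
# surviving mirror, so a single disk failure does not lose data.

class SharedDiskPair:
    def __init__(self):
        self.disks = [dict(), dict()]   # the two mirrored shared MMS disks

    def write(self, key, value):
        for disk in self.disks:         # mirrored write: both copies updated
            disk[key] = value

    def read(self, key, failed=()):
        for i, disk in enumerate(self.disks):
            if i not in failed:         # any surviving mirror serves reads
                return disk[key]
        raise IOError("both shared disks lost")

pair = SharedDiskPair()
pair.write("bss_database", "local MIB")
print(pair.read("bss_database", failed={0}))  # still available after one failure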

Mass Memory Storage
3 - MMS2 Introduction
[Diagram: MMS2 block diagram. A new 73 GB LVD SCSI disk (from Hitachi)
sits behind a SCSI expander, with slot-dependent activation, LVD SCSI
terminators at the end of the SCSI bus, and expander isolation for disk
shut-down; the backplane carries the LED drive, the remove-request push
button, the ITM/MTM bus, and a -48 V DC/DC converter.]

MMS2 HW presentation:
• Bigger-capacity disk: 73 GB (vs 9 GB for MMS1).
• MMS2 boards replace MMS1 boards with the same functionality.

Mixed MMS1/MMS2 configurations are allowed:
• An MMS2 module can be mixed with an MMS1 module for private or shared MMS.
• Upward compatibility but no backward compatibility:
— An MMS1 module can be replaced by an MMS2 module.
— An MMS2 module already installed must not be replaced by an MMS1
module.

Operations with MMS2:
• MMS2 boards can be introduced in a BSC e3 shelf in any of the slots
available for the current MMS boards (private and shared). SW compatibility
with V16.

Mass Memory Storage
4 - MMS2 and Higher Capacity Disk
[Front panel view, with removal-request push button.]

Installed disk        Supported replacement disk
9 Gb/36 Gb flanged    9 Gb/36 Gb flanged, or 73 Gb
73 Gb                 73 Gb only (not a 9 Gb/36 Gb flanged disk)

Only replacement of a disk by a disk of the same or higher capacity is
supported.

The current MMS1 module (9 GB) houses a SCSI hard disk. The new MMS2
disk, introduced in V15.1, is a 73 GB device and it uses the same SCSI interface
as the 9 GB disk.

Traffic Management Unit
1 - Main Functions
TMU

Traffic management:
• Radio resources: TMG_RAD
• Connections (setup, release, HO): TMG_CNX
• A interface messages (paging, incoming HO): TMG_MES
• Agprs interface messages: TMG_RPP

BTS site supervision: SPR
TCU 3000 / TCU 2G supervision: SUP-TCU / SPT
PCUSN supervision: SPP
Ater management: TMG_COM

LAPD management: levels 1, 2 and 3
SS7 management: SCCP, MTP1, MTP2, MTP3

The TMU is responsible for the BTS configuration and the main Call Processing functions:
• GSM/GPRS traffic management,
• GSM signaling (LAPD and SS7),
• GPRS signaling (LAPD).
These functions are processed by six software modules.
TMG_RAD:
• manages radio resources for a group of sites: allocation, modification and release of
radio channels,
• manages the RSL dialog on the Abis and radio interfaces,
• supervises coherence of allocated channels between the BTSs and the BSC.
TMG_CNX:
• drives setup, release, assignment and handover,
• asks for traffic connections.
TMG_MES:
• codes/decodes A interface messages,
• drives connectionless messages: paging, incoming HO.
TMG_RPP: codes/decodes Agprs interface messages.
TMG_COM: allocation, release and administration of terrestrial circuits (CICs).
SPR: Supervision of BTS sites (configuration and defense).
For reliability purposes, the main Call Processing sub-functions use the Fault Tolerance
service: for each sub-function there is one active entity on a TMU and one passive entity
on another TMU.
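
A minimal sketch of that placement rule (illustrative Python; the
round-robin policy is an assumption made for the example, the only
property taken from the text being that the active and passive entities of
a sub-function never share a TMU):

from itertools import cycle

def place_entities(subfunctions, tmus):
    """Assign (active_tmu, passive_tmu) per sub-function, never colocated."""
    slots = cycle(range(len(tmus)))
    placement = {}
    for sf in subfunctions:
        a = next(slots)
        p = next(slots)
        if p == a:                       # only possible with a single TMU
            raise ValueError("need at least 2 TMUs for fault tolerance")
        placement[sf] = (tmus[a], tmus[p])
    return placement

print(place_entities(["TMG_RAD", "TMG_CNX", "TMG_MES"],
                     ["TMU0", "TMU1", "TMU2"]))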

Traffic Management Unit
2 - Call Processing and Traffic Management

Call Processing
Resource Allocator

MSC BSC Radio


A connection transaction connection Abis
SCCP Distr. Distr. LAPD
Layer Layer
Transparent Message Transfer

411-1075A-001.1603
3-17 November, 2006
FOR TRAINING PURPOSES ONLY

The Traffic Management Unit (TMU) is responsible for managing the GSM
protocols in a broad sense:
• provide processing power for GSM Call Processing,
• terminate GSM protocols (A, Abis and Ater interfaces),
• terminate low level GSM protocols (LAPD and SS7).
The GSM Call Processing function is responsible for the management of GSM
communications:
• traffic management (connections and transfer of user information MS/MSC),
• network resource allocation (terrestrial circuits and radio resources),
• handover,
• radio measurements,
• power control.
The corresponding software is spread over all TMU modules, but is split into
several entities:
• radio resource allocation: per BTS site,
• terrestrial circuit allocation: per TCU and per PCUSN,
• MSC connection and BSC transaction: internal criteria.

Traffic Management Unit
3 - TMU2 Introduction

TMU2 HW Presentation:
Flash 2 MB One Single board (TM+SBC+PMC).
8 MB SSRAM NTQE04BA

Optional TMU2 Capacity:


ITM MTM
PMC slot MIM 525 Erlang
MPC8560 120 Lapd & 4 SS7
512 MB TMU2=1.75TMU1
Core @ 833 MHz DDR333
CPM @ 333 MHz SDRAM
TMU2 Technical spec:
Optional Based on PQ3 processor (MPC8560)
PMC slot 512 Mbytes RAM
TDM PHY ATM ATM
clock CPLD 77V106 25,6Mbps
PLL UTOPIA PHY ATM
Or TMU2 Compatibility:
Level 18 bit 51,2 Mbps
MT9043 77V106 Mixed Configuration TMU/TMU2 allowed
No specific operation at introduction
8 KHz
SW Compatibility V16

411-1075A-001.1603
3-18 November, 2006
FOR TRAINING PURPOSES ONLY

The distinction between TMU1 and TMU2 is made by a different PEC code
value.

TMU2 is a mono-processor board, based on a Freescale PowerQuicc III with
512 MB SDRAM. It is in charge of all tasks previously performed by the
three TMU1 processors (SBC, PMC and TM sub-boards).
120 LAPD and 4 SS7 ports are available for signaling channels.

TMU1: PEC code = NTQE04AA
TMU2: PEC code = NTQE04BA

ATM Subsystem
ATM 25 Interface Distribution
[Diagram: ATM 25 interface distribution in the Control Node. Both ATM
switches are active, each with 25 Mbit/s ATM links to the active and
passive OMUs and to the TMUs.]

The Control Node uses a duplex, star connectivity, with cell switching performed in
both ATM SW modules at the center of the stars and the other Resource Modules
at the leaves.
From a hardware perspective, the ATM subsystem is the key factor for platform
robustness and scalability.
This subsystem provides reliable backplane board interconnections with live
insertion capabilities. It has two main components:
• a pair of ATM switches (ATM SW module), working simultaneously,
• an ATM Adapter, located in each of the OMU and TMU modules.
The connections between modules use redundant ATM 25 point to point
connections to ATM switches, allowing:
• high fault isolation, signal integrity,
• live insertion,
• backplane redundancy,
• scalability.
The backplane supports a redundant ATM 25 Mbps to any slot using the ATM 25
standard as defined by the ATM Forum.
It carries all the internal signaling information, using the AAL1 and AAL5 protocols.
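
For readers unfamiliar with ATM addressing, the sketch below packs a
standard 5-byte ATM UNI cell header (GFC, VPI, VCI, PT, CLP, HEC) in
Python. The field layout and HEC computation follow the public ATM
specifications; nothing here is specific to the Nortel platform:

def crc8_hec(data: bytes) -> int:
    """HEC: CRC-8, polynomial x^8 + x^2 + x + 1, XORed with the 0x55 coset."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

def atm_uni_header(vpi: int, vci: int, pt: int = 0, clp: int = 0,
                   gfc: int = 0) -> bytes:
    """Pack GFC(4) VPI(8) VCI(16) PT(3) CLP(1) and append the HEC byte."""
    head = bytes([
        (gfc << 4) | (vpi >> 4),
        ((vpi & 0x0F) << 4) | (vci >> 12),
        (vci >> 4) & 0xFF,
        ((vci & 0x0F) << 4) | (pt << 1) | clp,
    ])
    return head + bytes([crc8_hec(head)])

print(atm_uni_header(vpi=1, vci=8).hex())  # a cell addressed to VPI=1, VCI=8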

ATM Switch Module
1 - Functions
[Diagram: ATM SW module internals. AAL5/AAL1 SAR with the messaging and
communication functions, the ATM routing table, and OA&M sit above the ATM
switch fabric, which multiplexes the 25 Mbit/s links from the OMUs (2x)
and the TMUs (4x and 6x) onto 155 Mbit/s links and the SONET OC3 optical
interface.]

The ATM SW module provides a high performance interconnection between the


OMU and TMU modules, as well as the ATM on OC-3 connectivity towards the
Interface Node.
Messaging provides a generic API to all entities which exchange messages
with one another, using the UDP service only.
Communication is in charge of all communication tasks:
• between processors of the module (TCP/IP),
• file transfer with the OMU module (TCP/IP and FTP).
The ATM routing table management configures on startup and allows modification
of the routing table at run-time for AAL1 and AAL5.
The OA&M Local Agent centralizes all administrative action relative to the module:
• configuration,
• performance measurement,
• fault notification.
The ATM SW is provisioned in a 1+1 active/active scheme, with both modules
working simultaneously.

ATM Switch Module
2 - ATM Switching Principle

[Diagram: two cells entering the ATM switch on port 1. Cell 1 (VPI 1,
VCI 8) leaves on port 2 as (VPI 4, VCI 5); cell 2 (VPI 6, VCI 4) leaves on
port 3 as (VPI 2, VCI 9).]

Switching table:
Input (Port, VPI, VCI)    Output (Port, VPI, VCI)
(1, 1, 8)                 (2, 4, 5)
(1, 6, 4)                 (3, 2, 9)

VC/VP ATM switch: Input(Port, VPI, VCI) -> Output(Port, VPI, VCI)
VP ATM switch: Input(Port, VPI) -> Output(Port, VPI)

ATM switching consists first of establishing a virtual circuit for each
communication, using a virtual channel or VC and a virtual path or VP.
These virtual circuits are established statically according to engineering rules, they
are Permanent Virtual Circuits or PVCs.
The main function of an ATM switch is to receive cells on a port and to switch
those cells to the proper output port based on the VPI and VCI values of the cell.
This switching is controlled by a switching table that maps input ports to output
ports based on the values of the VPI and VCI fields.
While the cells are switched through the switching fabric, their header values are
also translated from the incoming value to the outgoing value.
Addressing tables converting between VP/VC and slot number are loaded from
ATM SW module at startup time and stored in the flash EPROM of the ATM part
of all modules:
• AAL1 routing tables are dynamic,
• AAL5 routing tables are static.
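
A hedged sketch of the table-driven switching just described, in Python;
the two entries reproduce the example from the slide, and the dictionary
stands in for the hardware switching fabric:

# Cells are looked up by (input port, VPI, VCI) and forwarded with a
# rewritten header, exactly as in the switching table above.
SWITCHING_TABLE = {
    # (in_port, VPI, VCI): (out_port, new_VPI, new_VCI)
    (1, 1, 8): (2, 4, 5),
    (1, 6, 4): (3, 2, 9),
}

def switch_cell(in_port, vpi, vci):
    out_port, new_vpi, new_vci = SWITCHING_TABLE[(in_port, vpi, vci)]
    # header translation: the cell leaves carrying the new VPI/VCI
    return out_port, new_vpi, new_vci

assert switch_cell(1, 1, 8) == (2, 4, 5)   # Cell 1 of the example
assert switch_cell(1, 6, 4) == (3, 2, 9)   # Cell 2 of the example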

BSC 3000: Interface Node

[Diagram: Interface Node sub-rack layout, slots 1 to 15 on two shelves.
Shelf 1: LSA RCs 1, 2 and 3, fillers, SIM A. Shelf 0: LSA RCs 5, 0 and 4,
two ATM RMs, 8K RMs 0 and 1, CEMs 0 and 1 (slots 7 and 8), SIM B. LSA RC 0
is mandatory for synchronization.]

The Interface Node is connected to the Control Node by four optical fiber cables
with a standard ATM interface.
There are four major hardware modules that make up the Interface Node:
• the Common Equipment Module (or CEM),
• the 8K subrate matrix Resource Module (or 8K RM),
• the Low Speed Access Resource Complex module (or LSA RC),
• the Asynchronous Transfer Mode Resource Module (or ATM RM).
The maximum configuration for the Interface Node is the following:
• six LSA RC modules,
• two ATM RM modules,
• two 8K-RM modules,
• two CEM modules,
• two SIM modules.
The CEMs have special slots (slots 7 and 8, in shelf 0), and both are always
provisioned.
The LSA-RC module 0 is mandatory, as the Interface Node is synchronized
through the PCMs of this slot (synchronizing PCMs 0-1-2-3-4-5).

Interface Node
1 - General Architecture

[Diagram: the ATM RM (ATM/S-link interface towards the Control Node), the
switching unit (CEM at 64 kbit/s with the 8K RM at 8 kbit/s), and the
LSA RC PCM controllers towards the BTS (Abis), the TCU (Ater), and the
PCUSN (Agprs), all interconnected by S-links.]

The Interface Node is composed of a controller (CEM) and a set of resource


modules (RM) that are connected point-to-point to the CEM through the back-
panel and communicate via a proprietary communication protocol called “S-link”.
The main functions of the modules are the following:
The Common Equipment Module (or CEM) controls the BSC Interface Node
Resource Modules, and provides:
• system maintenance,
• clock synchronization,
• traffic switching.
The Asynchronous Transfer Mode Resource Module (or ATM RM) adapts Time
Slot (DS0) based voice and data channels of S-links to ATM cells for transmission
over a Synchronous Optical NETwork (SONET), OC-3c interface.
The 8K subrate matrix Resource Module (or 8K-RM) adds subrate switching
capability to the Interface Node, as the CEM is only capable of switching at a TS
(DS0) level (64 kbps circuits).
The Low Speed Access Resource Complex or LSA RC is used to interface the
BSC to both TCU and BTS using PCM links (E1 or T1).

Interface Node
2 - Detailed Architecture
[Diagram: detailed architecture. Two ATM RMs (one per Control Node plane)
serve two CEMs (active and passive, linked by IMC links and DS512 links);
each CEM reaches the 8K RMs (8 kbit/s) and the LSA RC PCM controllers
(towards BTS, TCU, PCUSN) over S-links.]

The Interface Node architecture is based on a duplicated Common Equipment


Module or CEM.
Other modules: ATM-RM, LSA-RC and 8K-RM are connected to the CEMs via
proprietary PCM serial links (S-links).
The active CEM sources all PCM streams leaving the CEM complex.
Both CEMs receive identical PCM traffic from all sources.
The two CEMs can communicate with each other via the Inter Module
Communication (IMC) links, in order to synchronize Call Processing and
maintenance states.
This results in a point-to-point architecture, which (when compared to bus
architectures) provides:
• superior fault containment and isolation properties,
• fewer signal integrity related problems,
• easier backplane signal routing.
In addition to payload TSs (DS0), the S-links transport messaging channels,
overhead control and status bits between the CEMs and the RMs.

ATM RM
Logical Architecture
[Diagram: ATM RM logical architecture. The convergence sublayer and
segmentation-and-reassembly sublayer sit above the ATM and physical (OC-3)
layers; AAL1 maps PCM DS0s (LAPD, SS7) to the S-links, while AAL5 carries
OA&M and Call Processing messaging (SPM) over the 155 Mbit/s links towards
the Control Node.]

The main functions of the ATM-RM are:


• terminating the OC3 optical interface using a single mode fiber,
• mapping TSs (DS0) from the six S-links, to ATM cells using AAL1 in Structured
Data Transfer mode,
• relaying the contents of AAL5 cells to the CEM (BSC OA&M and Call
Processing).
The ATM treatment is composed of three layers:
• AAL layer (Convergence Sublayer and SAR Sublayer),
• ATM layer,
• Physical layer (OC3 interface).
The ATM RM is configured statically to associate one VP/VC, corresponding to a
channel on the Control Node side, to one TS of the S-links to the CEM.
The ATM RMs are provisioned in pairs to provide redundancy and connection
protection. Both modules are used at the same time and the messages are
duplicated.
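
The static VP/VC association described above can be pictured as a fixed
lookup table. The sketch below is purely illustrative; the VPI/VCI values
and S-link names are hypothetical, not taken from the product
configuration:

# Fixed table from (VPI, VCI) on the OC-3 side to (S-link, time slot)
# on the CEM side, mirroring the ATM RM's static configuration.
VC_TO_SLINK_TS = {
    (0, 32): ("slink-0", 5),   # e.g. a LAPD channel carried in AAL1
    (0, 33): ("slink-1", 7),   # e.g. an SS7 channel carried in AAL1
}

def ds0_for_vc(vpi: int, vci: int):
    """Return the (S-link, TS) a given VP/VC is nailed up to."""
    return VC_TO_SLINK_TS[(vpi, vci)]

print(ds0_for_vc(0, 32))  # -> ('slink-0', 5)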

Switching Unit
1 - Common Equipment Module
[Diagram: CEM internals. The Switch Manager (Call Processing) drives the
8K and 64K connection managers; OA&M, the 64K switching matrix, and the
PCM clock sit between the ATM RM (ATM/S-link interface), the 8K RM, and
the LSA RC PCM controllers.]

The Common Equipment Module is the main module of the Interface Node.
The CEM handles the following functions:
• channel connection management (traffic switching),
• control of the Interface Node Resource Modules (downloading, testing,
configuring),
• system maintenance, using the TML,
• clock synchronization,
• alarm processing.
The main function of the Switch Manager, is to establish, release and modify
Abis/Ater connections in the Switching Matrix (switch fabric), under the control of
Call Processing (TMU).
Its other function is to establish 64 kbps connections for signaling links.
The switch fabrics are updated on both CEMs to ensure consistency between
them.
The CEM is provisioned in a 1+1 hot stand-by redundancy scheme.
One CEM is active, i.e. actually performing Call Processing functions, while the
other is inactive, ready to take over if the active module fails.
The messages between the IN OA&M application (OMU) and the CEM are
exchanged using the IP protocol over AAL5 ATM circuits. The IN OA&M
application handles only IP addresses and TCP/UDP ports.

Switching Unit
2 - Common Equipment Module and 8K RM

[Diagram: the switching unit. The CEM (64 kbit/s primary stage) and the
8K RM (8 kbit/s secondary stage) sit between the LSA RC PCM controllers,
with the primary S-link carrying the internal dialog.]

The switching unit manages all the flow of connections sent by the Call Processing
and BTS OA&M applications from the Control Node (TMU).
The Integrated Control Manager or ICM software of the CEM is responsible for
establishing connections between bearer channels, using a two stage matrix.
The switching unit is composed of two types of module:
• the Common Equipment Module or CEM offers a 64 kbps matrix (switch fabric)
only capable of switching at TS (DS0) level,
• the 8K RM is a subrate matrix Resource Module which provides a secondary
stage of switching, moving individual bits within each TS.
Internal dialog between CEM and other modules (LSA RC and 8K RM) is carried
out by reserved TSs (30 to 40) of the Primary S-link.
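
To visualize the two switching granularities, the toy Python model below
moves single bits between time slots the way the subrate stage does. The
frame layout (8 bits per TS per 125 µs frame, so 1 bit = 8 kbit/s) is
standard PCM arithmetic, while the crosspoint map is invented for the
example:

def switch_subrate(frame_in, frame_out, bit_map):
    """frame_* : dict ts -> list of 8 bits (one 125 us frame).
    bit_map   : {(in_ts, in_bit): (out_ts, out_bit)} crosspoints."""
    for (in_ts, in_bit), (out_ts, out_bit) in bit_map.items():
        frame_out[out_ts][out_bit] = frame_in[in_ts][in_bit]
    return frame_out

abis = {1: [1, 0, 1, 1, 0, 0, 1, 0]}      # one incoming 64 kbit/s TS
ater = {5: [0] * 8}                        # one outgoing 64 kbit/s TS
# route the 16 kbit/s sub-channel in bits 0-1 of TS 1 to bits 4-5 of TS 5
cross = {(1, 0): (5, 4), (1, 1): (5, 5)}
print(switch_subrate(abis, ater, cross))   # -> {5: [0, 0, 0, 0, 1, 0, 0, 0]}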

Switching Unit
3 - 8K-RM (SRT)
[Diagram: 8K RM internals. The S-link interface (towards the active and
passive CEM primary S-links at 64 kbit/s), the messaging channel
interface, the channel sequencer, the clock, and the main bus serve the 8K
switching matrix (2268 TSs).]

The 8K RM is used for the bearer channels that have to be switched by the
Interface Node, between the Abis interface and the Ater interface.
The 8K RM or Subrate Matrix is a 4096 bit-to-bit switch, which
communicates with the two CEMs via nine S-links connected to the
backplane.
The 8K RM is provisioned in a 1+1 (active/active) redundancy scheme.
The active CEM module controls the switching activity of the two 8K RM modules,
using the 36 reserved TSs of the Primary S-link:
• switch messaging (30 TS),
• synchronization (6 TS).
The S-link Interface extracts messaging for communication with CEMs and
generates the reference clock.
The Channel Sequencer performs rate adaptation and channel selection.
The Switching Matrix performs channel switching at an 8 kHz frame rate, using an
eight-bit matrix, working in parallel. The fanout is limited to 2268 Time Slots
(payload).

Switching Unit
4 - DS512

[Diagram: four new DS512 optical links added between the CEM (64 kbit/s)
and the 8K RM (8 kbit/s), alongside the existing internal and external
switching connections.]

In V14, the BSC 3000 can switch up to 2268 DS0 on the Abis, Agprs and Ater
interfaces.
With the introduction of EDGE, this switching capacity needs to be
increased so that it does not become a limiting factor.
To increase the BSC 3000 switching capacity, four DS512 links (optical
fibers) are established between the CEM and the 8K RM module.
With this connection, the BSC 3000 DS0 capacity increases from 2268 DS0 up
to 4056 DS0.

Switching Unit
5 - Internal S-Link Connection
9 S-links = 256 x 9 = 2304 Time Slots

[Diagram: internal S-link connections. The 8K RM is linked to the CEM by
nine S-links, while the ATM RM and each LSA RC use three S-links each;
each S-link carries 256 Time Slots or DS0 (64 kbps).]

As the 8K RM needs nine S-links to the CEM, it has a fixed position in
Interface Node shelf 0, whereas the LSA RC and ATM RM only need three
S-links each for the back-panel connection.
Each S-link provides 256 Time Slots; nine S-links therefore carry
9 x 256 = 2304 TSs, of which 36 are reserved for internal dialog, leaving
the 2268 payload TSs mentioned earlier.

Switching Unit
6 - Switching LAPD and SS7 Time Slots
[Diagram: switching LAPD and SS7 Time Slots. Each ATM RM delivers the same
TSs (TSa, TSb, TSc) on its own S-link; the CEM Y-connects the duplicated
TSs to the required LSA RC PCM controller at 64 kbit/s.]
FOR TRAINING PURPOSES ONLY

LAPD and SS7 messages arriving in AAL1 cells on both ATM modules are carried
on two separate S-links to the CEMs.
A Y-connection connects the two identical TSs to the required LSA module:
• in the ATM RM to LSA-RC direction, only the TS of the active plane is
switched,
• in the LSA-RC to ATM RM direction, the TS is broadcast to both S-links.
S-links used for signaling are called Primary S-links.

Low Speed Access Resource Complex
1 - Functions

[Diagram: LSA RC internals. Two IEMs (active and passive), each with an
S-link mapper, framer, NRZ-to-HDB3/B8ZS transcoding, and an HDLC
controller, sit behind the TIM's passive IEM selection towards the E1 or
T1 PCMs and the CEMs.]

The Low Speed Access Resource Complex or LSA-RC is used to interface the
BSC to the TCU, the PCU and the BTS.
The LSA-RC is the PCM interface module.
It is called “Resource Complex”, as it is made of three modules (taking three
slots):
• two Interface Electronic Modules (or IEM), they are in 1+1 hot stand-by
redundancy (field replaceable without service disruption),
• one Terminal Interface Module (or TIM), it is a passive switch that switches the
PCM towards the active IEM. The TIM does not contain electronic components
(very high MTBF) and provides LSA internal redundancy.
Main IEM functions:
• the S-Link Mapper is responsible for transferring payload data between the
channels on the S-Link interface and the respective channels of the
PCM30/DS1 Link interface,
• transcoding converts signals from NRZ to HDB3 (or B8ZS),
• the HDLC controller is used for LAPD level 2 treatment (only used in the TCU).
The BSC Interface Node can contain up to six LSA-RC modules to provide 126
PCM30 or 168 DS-1.
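
Since the notes mention NRZ-to-HDB3 transcoding, here is an illustrative
HDB3 encoder following the standard E1 line-coding rule (a run of four
zeros becomes 000V or B00V so that successive violation pulses alternate
polarity). This is a textbook sketch, not the IEM implementation:

def hdb3_encode(bits):
    """Encode a bit list into HDB3 symbols: +1, -1, 0."""
    out, zeros = [], 0
    last = -1            # polarity of the last mark (start value arbitrary)
    ones_since_v = 0     # marks transmitted since the last violation
    for b in bits:
        if b == 1:
            out.extend([0] * zeros); zeros = 0
            last = -last                # AMI rule: alternate mark polarity
            out.append(last)
            ones_since_v += 1
        else:
            zeros += 1
            if zeros == 4:              # substitute the 0000 run
                if ones_since_v % 2:    # odd: 000V (V repeats last polarity)
                    out.extend([0, 0, 0, last])
                else:                   # even: B00V (B alternates, V repeats B)
                    last = -last
                    out.extend([last, 0, 0, last])
                zeros = 0
                ones_since_v = 0
    out.extend([0] * zeros)
    return out

print(hdb3_encode([1, 0, 0, 0, 0, 1, 0, 0, 0, 0]))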

Low Speed Access Resource Complex
2 - Physical Architecture
[Diagram: the two IEMs and the TIM mounted on the Resource Complex
Mini-backplane (RCM), with the PCM cabling running from the TIM backplane
to the CTMx connectors of the CTU.]

The LSA-RC module is made up of three modules (taking three slots):


• two Interface Electronic Module or IEM, that are in 1+1 hot standby
redundancy and field replaceable without service disruption,
• one Terminal Interface Module or TIM.
These three modules are connected on a specific backplane called Resource
Complex Mini-backplane or RCM.
The RCM is designed for both IEM and TIM modules. It provides:
• an interface for 21 E1 or 28 T1 PCMs,
• matched impedance of 120 Ω or 75 Ω for E1 PCMs, and 100 Ω for T1 PCMs.

Low Speed Access Resource Complex
3 - LSA RC Front Panel

[Front panel callouts:]
• Problem with the IEM module.
• IEM module operating and not to be removed.
• Indicator showing which IEM module serves as the synchronization
reference.
• PCM (span) display: the number of the PCM in fault and the type of
fault. The red indicators show a fault condition on the span: Loss Of
Signal, Alarm Indication Signal, Loss of Frame Alignment, Remote Alarm
Indication.
• Push buttons to search for the next or the previous PCM in fault.

The LSA-RC is the PCM (or SPAN) interface module.


The Spans can be checked in Automatic mode or in Manual mode, selected by the
STOP pushbutton.
In Manual mode the SPAN number is selected using the two up and down
pushbuttons.
The seven red indicators show, for the selected SPAN, the following
faults:
• LOS: Loss Of Signal,
• AIS: Alarm Indication Signal,
• LFA: Loss of Frame Alignment (with a T1 IEM the indication is LOF:
Loss Of Frame Alignment),
• RAI: Remote Alarm Indication.
The BSC 3000 uses the first PCM ports from LSA logical No. 0 (the LSA in slots
[4,5,6] shelf 0) as synchronizing PCMs.
By default the synchronizing ports are:
• No. 0, 1, 2, 3, 4, 5 for E1 PCMs,
• No. 0, 1, 2, 3, 4, 5 for T1 PCMs.

Low Speed Access Resource Complex
4 - External Connection

[Diagram: the LSA RC module (active and passive IEMs with the TIM on the
RCM) connected by Tx and Rx cables to the Cable Transition Unit (CTB with
connectors CTMx1 to CTMx7) in the SAI.]

Two versions of the LSA-RC module exist:


• for International PCM30: 21 E1 PCMs, HDB3 coding,
• for North American DS-1: 28 T1 PCMs, AMI or B8ZS coding.
Each LSA is associated with a CTU (Cable Termination Unit). The CTU is housed
in the PCM cabling interface (so-called Service Area Interface) and provides
copper concentration.

The SAI is a cabinet, attached to the BSC frame, enabling front access to
the PCM cabling. It can host up to six CTUs (plus two optional HUBs) in
the BSC SAI and eight CTUs in the TCU SAI.

Low Speed Access Resource Complex
5 - IEM / IEM2
[Diagram: an LSA RC equipped with two IEM1 modules (active and passive)
beside an LSA RC equipped with two IEM2 modules, each with the TIM on the
RCM.]

The Interface Electronics Module (IEM) is a component of the Low Speed Access
(LSA) module. It is the electronic interface for E1 or T1 PCMs. Two IEM
instances are associated with each LSA, duplicated in a 1+1 protection
scheme.
LSA modules are located either in BSC 3000 Interface Node or in TCU 3000.

This IEM2 evolution is part of the normal life cycle management of the BSC/TCU
3000 H/W modules.

Mixed configurations of IEM1 and IEM2 modules are allowed on the same shelf.
Therefore, an LSA may be equipped with two IEM1 modules, with two IEM2
modules, or with one IEM1 module and one IEM2 module.

TCU 3000: Transcoding Node
[Diagram: TCU 3000 cabinet, with two independent Transcoding Node
sub-racks, slots 1 to 15 on two shelves each, holding TRMs, LSA RCs
(LSA RC 0 mandatory for synchronization), CEMs, SIMs, and fillers.]

The TCU 3000 is based on the Spectrum architecture, as is the Interface Node of
the BSC 3000.
One TCU 3000 cabinet consists of:
• two independent Transcoding Nodes (one sub-rack),
• one cabling interface area or SAI, which provides front access to the PCM
cabling.
Each sub-rack supports twenty-eight modules (or slices) and two power interface
modules or SIMs.
The TCU 3000 uses the last PCM ports from LSA logical No. 0 (the LSA in slots
[4,5,6] shelf 0) as synchronizing PCMs.
By default the synchronizing ports are:
• No. 15, 16, 17, 18, 19, 20 for E1 PCMs,
• No. 22, 23, 24, 25, 26, 27 for T1 PCMs.

TCU 3000 Transcoding Node
[Diagram: Transcoding Node. Up to 12 TRMs (vocoders) and up to 4 LSA RC
modules (PCM controllers towards the BSC and the MSC) are connected by
S-links to the active and passive CEMs (64 kbit/s), which are linked by
IMC links.]

The Transcoding Node (or TCU) is composed of a controller (CEM) and a set of
Resource Modules (RM) connected point-to-point to the CEM via S-links, through
the backpanel.

One TCU is composed of the following physical entities:


• two Common Equipment Modules (or CEM), identical to the BSC CEM
module, including:
— an OA&M processor,
— a 16 x 16 PCM link (32 TS) switching matrix,
— a circuit used to synchronize the time base on the clock taken from three
of the PCM links connected to the MSC,
• up to twelve Transcoder Resource Modules (or TRM) which enable voice
coding/decoding for Full Rate, Enhanced Full Rate and AMR traffic channels,
• up to four Low Speed Access RC modules (LSA-RC) which:
— are identical to BSC LSA RC modules,
— can manage up to 21 external E1 PCM (or 28 T1) links each.

Common Equipment Module
1 - Signaling Processing

[Figure: signaling paths through the Transcoding Node. LAPD time slots from the BSC (Ater interface) and SS7 time slots towards the MSC (A interface) pass through the LSA RC PCM controllers and their HDLC controllers; the CEM hosts the 64 kbit/s switching matrix and the Call Processing function.]

LAPD links established between the TCU and the BSC (on the Ater) are used for both the OA&M and Call Processing functions located on the CEM:
• OA&M: management of the TCU under the control of the BSC:
— downloading and configuration from the BSC local disk,
— supervision: event reports are sent to the OMC-R through the BSC.
• Call Processing: the specific treatments performed by the TCU for each call are initiated by the BSC:
— choice of the voice algorithm,
— Ater and A time slots to be used.
LAPD links are:
• switched by the switching matrix of the CEM (they arrive from the BSC Control Node via the ATM SW),
• processed by the HDLC Controller (up to four links) located on an LSA-RC module,
• carried on Ater PCM TSs:
— Call Processing: one TS per LSA-RC module,
— O&M: one TS per TCU node.
SS7 time slots are simply switched through the switching matrix, without any transcoding process.

Common Equipment Module
2 - Information Switching and Processing

[Figure: speech or data switching. Concentrated time slots (a) from the BSC are processed by the TRM vocoders, and the resulting 64 kbit/s time slots (1) to (4) are switched by the CEM towards the MSC.]

When the TCU 3000 receives a command to establish a communication of a given type on a given A interface circuit, it performs the connection between the A interface circuit, the appropriate transcoding resource and the Ater interface circuit.
Thanks to this capability, it is not necessary for the MSC to manage A interface circuit pools.
The speech flow carried on time slots is transcoded by the TRM module voice coders, so-called vocoders.
Each concentrated TS (a) to/from the BSC is processed by the TRM module.
Data flows are only rate-adapted from 8 or 16 kbit/s to 64 kbit/s.
Each processed TS (1), (2), (3), (4) is switched by the switching matrix of the CEM module to/from the MSC on the A interface.

Transcoder Resource Module

[Figure: TRM internal organization. A Power QUICC processor and an S-link interface serve three archipelagos; each archipelago comprises one MaiL Box (MLB) DSP and three islands of five DSPs, and can be assigned to FR, EFR, AMR or TTY. DSP: Digital Signal Processor; MLB: MaiL Box; PPU: Pre-Processing Unit; SPU: Signal Processing Unit.]

The Transcoder Resource Module (TRM) performs the GSM transcoding function. The TRM supports 216 vocoders:
• Full Rate, Enhanced Full Rate (EFR) and AMR voice coding/decoding,
• data rates up to 14.4 kbit/s.
A TRM contains one processor (Motorola Power QUICC) and 45 DSPs (Motorola DSP 311), organized in three identical archipelagos, each of which can be assigned dynamically to a particular type of vocoder: FR, EFR or AMR (from V14).
Each archipelago is made of one MaiL Box DSP and three DSP islands.
Each island consists of five DSPs:
• 1 PPU (Pre-Processing Unit) DSP managing frame synchronization, handovers, etc.,
• 4 SPU (Signal Processing Unit) DSPs managing the vocoding (six vocoders each).
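As a cross-check on the 216 figure: 3 archipelagos × 3 islands × 4 SPU DSPs × 6 vocoders per SPU = 216 vocoders per TRM.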
The TRM is provisioned in an N+1 load sharing redundancy scheme.
A TCU 3000 sub-rack (Transcoding Node) can contain up to 12 TRM modules.
The allocation of the vocoders, based on a dynamic process, is the result of a real-time
adjustment, starting at the initialization of the TCU.
When there are two or more types of vocoder to manage, the operator has to define for
each TCU 3000 node the minimum capacity associated with each type of vocoder, in
terms of number of communications to process.
During this process, the TCU may have to modify the initial partitioning, in order to satisfy
a larger number of requests than planned for a specific coder.
If the operator wants the TCU 3000 to perform dynamic resource allocation, he needs to
configure the minimum required capacity for each vocoder so as to leave some
transcoding resources in the “free pool”.
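A minimal sketch of such a reservation-plus-free-pool policy, assuming hypothetical codec names and counts (Python; illustrative only, not the actual TCU allocation algorithm):

    def allocate_vocoder(codec: str, reserved: dict, in_use: dict, total: int) -> bool:
        # Serve the request from the codec's guaranteed minimum capacity first
        if in_use[codec] < reserved[codec]:
            in_use[codec] += 1
            return True
        # Otherwise draw from the shared "free pool" left unreserved by the operator
        overflow = sum(max(0, in_use[c] - reserved[c]) for c in in_use)
        free_pool = total - sum(reserved.values())
        if overflow < free_pool:
            in_use[codec] += 1
            return True
        return False

    # One TRM offers 216 vocoders; reserving fewer than 216 leaves a free pool.
    reserved = {"FR": 60, "EFR": 60, "AMR": 60}
    in_use = {"FR": 0, "EFR": 0, "AMR": 0}
    print(allocate_vocoder("AMR", reserved, in_use, total=216))  # True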

Transcoder Resource Module
2 - TRM2 Introduction

[Figure: TRM2 board layout. S-link interface (SLIFS), flash, SDRAM, Power QUICC controller, JTAG/ITM/clock circuitry, three codec archipelagos (each with a PPU and several SPUs) and a common archipelago holding the MLB, interconnected by the LHP and TDM buses.]

TRM2 hardware presentation:
• The TRM2 board is composed of three archipelagos, each of which is dedicated to a codec type (FR, EFR, AMR, EFR_TTY).
• The TRM2 product code is NTQE08BA.

TRM2 capacity is 33-60% higher than TRM capacity.

Operations with TRM2:
• TRM2 boards can be introduced in a TCU 3000 shelf in any of the slots available for the current TRM boards.
• Software compatibility: V16 and above.
• Mixed configurations with TRM1 and TRM2 in one TCU 3000 are authorized.
• The TRM2 is RoHS compliant.

Internal PCM S-Link Allocation

[Figure: inside the Transcoding Node, each TRM and each LSA RC is connected to the CEM 64 kbit/s switching matrix through three S-links; the LSA RC PCM controllers face the BSC and the MSC.]

Every board of the Transcoding Node is connected to the CEM through three S-links.

Maintenance Trunk Module Bus

[Figure: the MTM bus running along the backplanes of the BSC 3000 and the TCU 3000.]

The Maintenance Trunk Module (MTM) bus runs along the backplane.

The MTM bus is a five-wire multi-drop bus used to ease communication of test and maintenance commands or data between a system test/maintenance control module and up to 250 modules.
Only one module (the active OMU for the BSC) is assigned mastership of the bus, and is responsible for conducting MTM bus transactions.
All other modules within the system are slaves on the test bus, but have the capability of initiating communication to the master through the MTM bus interrupt capabilities.
The ITM ASIC, which directly controls the MTM bus and is located on each transition module, can be configured to operate as either an MTM bus master or an MTM bus slave interface device.
Its outputs to the backplane are of the "open drain" type, so that failure of a power supply does not jeopardize the integrity of the whole bus.

Shelf Interface Module
Power Distribution

[Figure: SIM A and SIM B each take one -48 V feed (A feed / B feed) and distribute it over the backplane; on each shelf module, a PCIU and a PUPS convert the feed to +3.3 V.]

Each shelf has two Shelf Interface Modules, but a single SIM can supply all 28 modules.
The two SIMs provide, for the shelf:
• an EMI-filtered -48 V power supply,
• power switching (30 A) with soft-start circuitry,
• CEM/PCIU alarm interfaces,
• craftsperson access.
When a SIM module needs to be extracted (for repair or upgrade), it is necessary to switch off the module and to disconnect the power feed on the faceplate.

Service Area Interface
1 - Overview

[Figure: the Service Area Interface hosts a column of CTUs. Each CTU comprises a CTB mini-backplane carrying seven CTMx modules (CTM x1 to x7) and is connected by Tx/Rx cables to the TIM of its LSA RC module, which carries the active and passive IEMs.]

The Service Area Interface comprises the CTU (Cable Termination Unit) modules (six in the BSC SAI, eight in the TCU SAI), which provide the physical interface between the LSA RC modules and the customer's spans.
Each CTU is associated with one LSA RC module and includes:
• one CTB (Cable Transition Board), equipped to mate the backplane with seven CTMx,
• seven CTMx (Cable Transition Modules), which:
— terminate the cables that connect to the TIM board (LSA-RC) via the CTB,
— provide connectors for terminating customer A and Ater PCMs,
— provide secondary surge protection, manual loopback switches, and passive electronics for impedance matching for PCM30 coax connections.
The CTMx is available in three styles:
• CTMC (PCM30 coax), which provides three E1 PCMs,
• CTMP (PCM30 twisted pair), which provides three E1 PCMs,
• CTMD (DS-1 twisted pair), which provides four T1 PCMs.
The CTU numbering and the linking between the LSA and the CTU must respect the following principles:
• The operator must easily find the CTU corresponding to an LSA, in order to connect the LSA to the CTU.
• The operator must find the CTU associated with an LSA when the LSA is displaying a span error on its faceplate (connection/loopback operation of the corresponding CTM).

Service Area Interface
2 - CTU Connection

[Figure: port numbering on the CTU faceplate. E1 CTU: seven CTMs of three ports each, numbered 0-20 from the bottom upwards (0 1 2 / 3 4 5 / ... / 18 19 20). T1 CTU: seven CTMs of four ports each, numbered 0-27 (0 1 2 3 / 4 5 6 7 / ... / 24 25 26 27).]

On a BSC 3000 the PCM numbering is the following:
• LSA-RC number * 21 + (CTU port number) for E1 PCMs,
• LSA-RC number * 28 + (CTU port number) for T1 PCMs.
On a TCU 3000 the PCM numbering towards the A interface is:
• LSA-RC number * 21 + (20 - CTU port number) for E1 PCMs,
• LSA-RC number * 28 + (27 - CTU port number) for T1 PCMs.

For Ater PCMs, the connection is updated in the lsaPcmList parameter of the LSA-RC object at the OMC-R.

LSA-RC numbering (slot number to LSA-RC number):

    BSC 3000                 TCU 3000
    Slot     LSA-RC          Slot     LSA-RC
    5        0               5        0
    103      1               103      1
    109      2               109      2
    112      3               112      3
    13       4
    2        5
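A minimal sketch of these numbering rules (Python; the reversed A-interface numbering on the TCU is taken directly from the formulas above):

    def pcm_number(lsa_rc: int, port: int, pcm_type: str = "E1",
                   tcu_toward_a: bool = False) -> int:
        # 21 ports per E1 LSA-RC, 28 ports per T1 LSA-RC
        ports = 21 if pcm_type == "E1" else 28
        if tcu_toward_a:
            # on a TCU 3000, the A-interface port numbering is reversed
            port = (ports - 1) - port
        return lsa_rc * ports + port

    assert pcm_number(1, 0, "E1") == 21                      # BSC: LSA 1, port 0
    assert pcm_number(0, 20, "E1", tcu_toward_a=True) == 0   # TCU: LSA 0, port 20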

Service Area Interface
3 - BSC

[Figure: BSC SAI cabling. CTU#0 to CTU#2 serve the upper-row LSA RCs (numbers 1, 2 and 3, next to the ATM RMs); CTU#3 to CTU#5 serve the lower-row LSA RCs (numbers 5, 0 and 4, next to CEM 0/1 and the 8k RMs) of the Interface Node.]

Service Area Interface
4 - TCU

[Figure: TCU SAI cabling. CTU#0 to CTU#3 serve the upper Transcoding Node (LSA RCs 1, 2 and 3, plus LSA RC 0 next to CEM 0/1); CTU#4 to CTU#7 serve the lower Transcoding Node in the same arrangement.]

Student notes:
Data Flow Exercises

Section 4
Objectives

After this module of instruction, you will be able to draw the data paths inside the BSC 3000 and TCU 3000 for the following:
> Traffic (Circuit Switch and Packet Switch)
> GSM Signaling
> Call Process Signaling
> OA&M

Contents

> Internal BSC Dialogues
> Circuit Switch/Packet Switch Path
> GSM Signaling Path
> BSC 3000 and TCU 3000 Dialogue

Internal BSC Dialogues

[Block diagram: Control Node (OMUs with OAM and MMS, TMUs with Traffic Management, duplicated ATM SW) above the Interface Node switching unit (ATM RMs with ATM/PCM interfaces, the CEM 64 kbit/s matrix, the 8K RM 8 kbit/s matrix, and LSA RC PCM controllers towards the BTSs and the TCUs).]

On the block diagram of the Control and Interface Nodes, trace the path for
internal messaging:
• between TMUs,
• between the OMU and the CEM.

Circuit Switch/Packet Switch Path

[Block diagram: BSC Control Node (TMUs, OMU, PCU, ATM SWs), BSC Interface Node (ATM RMs, CEM, 8K RM, LSA RCs) and TCU Transcoding Node (CEM, TRM vocoders, LSA RCs) between the BTS and the MSC.]

On the block diagram of the BSC and TCU, trace the path for circuit switch traffic
and packet switch communication.

GSM Signaling Path

[Block diagram: the same BSC/TCU layout, used to trace the BTS/LAPD and MSC/SS7 signaling paths between the BTS and the MSC.]

On the block diagram of the BSC and TCU, trace the path for BTS/LAPD and
MSC/SS7 signaling.

BSC 3000 and TCU 3000 Dialogue

[Block diagram: the same BSC/TCU layout, used to trace the Call Processing and Operation and Maintenance dialogues between the BSC and the TCU.]

On the block diagram of the BSC and TCU, trace the paths for the Call Processing dialogue and the Operation and Maintenance dialogue between the BSC and the TCU.

Student notes:
BSC 3000 and TCU 3000 Operation

Section 5
Objectives

After this module of instruction, you will be able to:
> Indicate the means used to operate and maintain a BSC 3000 and a TCU 3000:
• OMC-R
• TML (Local Maintenance Terminal)
• RACE (Remote ACcess Equipment)
> Briefly describe the main operations:
• Software download
• Startup and shutdown

Contents

> Operation and Maintenance
> Object Model at the OMC-R
> Software Architecture
> Software Downloading
> Startup

Operation and Maintenance
Overview

[Figure: the BSC/TCU 3000 are operated and maintained through the OMC-R, the TML (Local Maintenance Terminal) and the RACE (Remote ACcess Equipment).]

The BSC 3000 takes advantage of its high processing power to perform many O&M tasks in parallel: for example, it takes charge of the software upgrade of all its BTSs once it gets the full software load from the OMC-R.
It can download the software of up to 100 TRXs simultaneously, hence considerably decreasing the upgrade duration, or the time necessary to bring the whole BSS network back into service after a cold restart.
The hardware and software architecture of the BSC 3000 and TCU 3000 (one-to-one links between hardware modules, supervision software, supervision activity of passive modules) allows precise and immediate fault detection (both hardware and software failures).
The simplicity of the hardware architecture allows the BSC to detect any hardware fault very precisely, at module level.
Each hardware module is a replaceable unit and supports hot insertion: when it has been detected as faulty, it can be replaced without stopping the BSC or the TCU, and the new module will be automatically configured and put into service by the BSC.

Object Model at the OMC-R
1 - OMC-R/BSC Interface

[Figure: old versus new BSC object model on the OMC-R/BSC interface.]

The object model converges towards the Q.3 object model of the OMC-R; in this way, the Q.3 mediation done in the OMC-R becomes easier and more effective.
Managed object modeling (the list of objects and their associated attributes, actions, notifications and counters) is equivalent to the one proposed on the Q.3 interface of the OMC-R Mediation Device.
Main benefits:
• less mediation: an average mediation rate of 4% instead of 55%, and network vision uniformity,
• single-stream OA&M: design cost reduction for OMC-R CM and BSC OA&M,
• hardware management: clear board identification, board restart, test triggering.

Object Model at the OMC-R
2 - bsc and transcoder Objects

[Figure: object tree. The bsc object carries pcmCircuit, software and hardware branches; the hardware branch splits into cn (with mms, omu, tmu and cc boards) and in (with 8krm, atm, cem and lsa modules, and mms/iem boards). The transcoder object carries hardware modules lsa*, cem and trm, with iem boards. Objects marked * (lsa) are updated manually; the main objects are triggered automatically.]

New hardware objects are introduced into the OMC-R BSS Q.3 object model for
each type of board or module to be managed in a BSC 3000 and TCU 3000.
These objects will be used by the different OMC-R applications (configuration,
fault, performance), exactly like the other Q.3 objects. For example, a fault related
to a hardware module will be notified directly on the corresponding hardware
object.
These hardware objects will be made visible both in the “internal” Q.3 interface
(MD/OMC-R) and in the “external” one (MD/NMS).
The main objects are triggered automatically: bsc3GEqpt, cn and in.
The LSA-RC shall be created manually by the operator at a specific position in the
shelf (configuration data of the LSA-RC object).
This creation results in the creation of the Resource Complex Management and
the TIM:
• the IEMs follow standard plug & play module management,
• the TIM is always in the central position (x position),
• the two redundant IEM modules always surround the TIM (x-1 and x+1
position) at the OMC-R level.
All board objects are created automatically.

Object Model at the OMC-R
3 - BSC 3000 Control

[Figure: OMC-R graphical view of a BSC 3000.]

All hardware modules of the BSC 3000 & TCU 3000 are modeled and managed
as logical objects. This allows both the BSC 3000 and the OMC-R to provide the
operator with precise information and services on each individual hardware
module:
• Board representation on the OMC-R GUI: The physical BSC board layout will
be represented in the OMC-R GUI.
• Fault representation: A hardware problem can be tracked thanks to this new
representation which allows faulty boards to be highlighted on the OMC-R GUI.
• Private data collection: dynamic data can be collected per board to give the operator specific information related to the boards/modules (localization, firmware identification, inventory information).
• Maintenance actions: actions can be performed on some boards/modules in order to prevent or correct hardware problems (board resets), or to trigger tests from the OMC-R.
• Performance measurement: a new localization is performed on the Q.3 and BSC/OMC interfaces, which significantly reduces the number of counters defined in the Q.3 interface. Thus, access to the observation report is simplified.

Object Model at the OMC-R
4 - TCU 3000 Control

[Figure: OMC-R graphical view of a TCU 3000.]

Graphical view of a TCU 3000, with easy fault localization thanks to the module representation.

Software Architecture
1 - Software Layers

• Application and Services Layer (GSM/GPRS Call Processing, OA&M, BSC and TCU services, ADMinistration, Abis, Ater and Agprs access)
• Platform Layer (Supervision, Startup, Load Balancing, Messaging)
• Base OS Layer (Memory and disk access)
• Hardware Abstraction Layer (Base Support Package, OS Kernel, drivers): AIX, VxWorks, VRTX
• Hardware/Firmware

A basic software package provides common services to all software units:
• The Application and Services Layer (ASL) is a set of functional entities providing the BSC/TCU services as GSM components: Call Processing, OA&M, Abis, Ater and Agprs access.
• The Platform layer is responsible for management of the platform: supervision, startup, Load Balancing, messaging.
• The Base OS layer is composed of the standard OS (AIX, VxWorks, VRTX) and off-the-shelf software running on the OS.
• The Hardware Abstraction Layer is responsible for making the upper layers independent of the hardware; it is composed of a Base Support Package (flash), the OS kernel and the drivers required to manage the hardware.

Software Architecture
2 - BSC Software Architecture

[Figure: software stacks per module. Control Node: OMU (OA&M, GSM/TCU/BTS OA&M, Platform, Base OS AIX) and TMUs (PCUSN/TCU/BTS OA&M, GSM Call Processing, Platform, Base OS VxWorks). Interface Node: CEM (IN OA&M, switch management, SAPI/Base, Base OS VRTX), LSA RC and 8K RM (RM OA&M, switch management, Base, Base OS VRTX), ATM RM (RM OA&M, ATM management, SAPI/Base, Base OS VRTX).]

The BSC 3000 Control Node software is divided into two main areas:
• A "TMN front-end" area composed of the two OMUs: the software is composed of centralized functions (OMC-R interface management, database management, etc.), possibly duplicated in passive mode on the mate OMU.
• A "Traffic Management" area composed of the TMUs: the architecture is based on a scalability policy. This means that a BSC can be equipped with only one TMU, with extension capability as more TMUs are provisioned (up to 14 TMUs in total). This implies a distributed software architecture that shares the processing load over all the TMUs. In this way, the "GSM application" and "Platform" layers are designed as distributed software.
The distribution criteria are closely linked to the managed objects:
• the software related to the TCU should preferably be distributed per TCU equipment,
• the software related to the PCUSN should preferably be distributed per PCUSN equipment,
• the software related to the BTS objects (BCF, TRX, TDMA, etc.) should preferably be distributed per BTS site,
• the software related to the MSC should be distributed only on software architecture criteria; in fact, the A interface objects are viewed as unmarked resources from the MSC point of view.
To achieve these goals, the "Load Balancing" and "Fault Tolerance" services provide, respectively, the capability to distribute the application entities over all the provisioned TMUs and the capability to protect the system from (or at least to reduce) the impact of failures.

Software Downloading
1 - BSC Downloading from the OMC-R

[Figure: the OMC-R downloads the EFT (the set of files to be downloaded, "Ensemble de Fichiers Transférables") to the MMS disk of the BSC via FTAM (File Transfer Access Management); the TML can also be connected locally.]

The BSS software (BSC, TCU and BTS) is downloaded into the BSC from the OMC-R. For each version and edition, the complete BSS software is delivered on a CD-ROM. This volume can be used at the OMC or TML level.
It is compressed and divided into several files, in order to download only the files modified between two versions and to reduce the downloading duration as much as possible.
The BSC 3000 stores two versions of the BSS software. The new version is downloaded in the background, without impacting BSC service.
Both the BSS software and the BSC OS can be downloaded in the background, or installed locally from the TML.
There is no PROM memory on the BSC 3000 and TCU 3000 hardware modules, with the exception of the ATM SW module (ATM switch).
All firmware is in flash EPROM and can be modified and downloaded remotely by the system.
The complete BSS software (BSC, TCU and BTS) is downloaded from the OMC-R to the BSC via FTAM.
The OMC and the BSC 3000 are connected through Ethernet and IP protocols.
The throughput is up to 10/100 Mbit/s (Ethernet standard) if the OMC-R is locally connected to the BSC.
When the BSC is remote, a minimum throughput of 128 kbit/s is necessary for efficient OMC-BSC communication.

Software Downloading
2 - BTS and TCU Downloading

[Figure: the BSS software stored on the BSC disk is distributed under the control of the active OMU: through the Interface Node (ATM RM, 8K-RM, LSA-RC) to the TCU (LSA-RC, TRM) over the Ater, and to the BTSs (BCF, TRX) over the Abis.]

BTS downloading
The BSC can download ten BTSs simultaneously per TMU.
With ten "active" TMUs, 100 BTSs can be downloaded simultaneously.
The BSC 3000 supports BTS background downloading since V16.

TCU downloading
TCU 3000 software is downloaded by the BSC 3000. It is compressed and divided into several files, in order to download only the files modified between two versions and to reduce the downloading duration as much as possible.
The BSC 3000 stores two versions of the TCU software.
The new version can be downloaded as a background task, without impacting TCU service.
The TCU software can also be installed locally from the TML.
A set of LAPD connections is used for TCU management in normal operation.
To download the TCU, supplementary LAPD connections must be set up.
These connections pre-empt (or wait for) the time slots used for communications.
A minimum of four LAPD channels can be managed per LSA module.
Downloading a set of files (about 20 Mbytes per TCU) lasts:
• with four LAPDs: about 20 minutes (requires a minimum of 2 LSAs),
• with eight LAPDs: about 10 minutes (requires a minimum of 3 LSAs).
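As a rough cross-check on these figures (assuming each LAPD channel occupies a full 64 kbit/s time slot, which is not stated above): four LAPDs give 256 kbit/s, so a 20 Mbyte load takes about 10.5 minutes of raw transfer. The quoted 20 minutes is consistent with roughly half the bandwidth going to LAPD framing, acknowledgements and file-transfer overhead, and doubling the channels to eight halves the duration, as stated.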

Startup
1 - BSC or TCU Cold Startup (MIB not built)

[Figure: the hardware startup progresses from board recovery to module recovery, up to Dead Office Recovery of the whole Control Node.]

This scenario applies to the C-Node modules:
• OMU
• TMU
• ATM SW

The overall startup sequence describes how the BSC goes from its initial power-
up state, with no software running, to a fully operational state where the
applications are running and providing GSM service.
This type of startup is called dead office recovery and first needs the entire Control
Node startup sequence to be performed.
The operator builds the network at OMC-R level and creates the BSC logical
object.
As soon as the OMC-R/BSC link is established, the BSC sends a notification
indicating that a MIB build is requested.
Upon receipt of this notification, the OMC-R triggers the MIB build phase:
• The MIB (Management Information Base) is built on the active OMU.
• The “Build BDA N+1” upgrade feature is provided on the BSC 3000, as in a
BSC 2G.
• This phase ends with the creation of the MIB logical objects followed by the
reception of a report build message.

Startup
2 - Board Startup: General Behavior

[Figure: board startup chain. The boot (Software Manager) starts the Base OS, the Local_OA&M and the Local Agent, then the non-FT applications and FT creators, and finally the Fault Tolerant applications.]

A module is said to be operational when all of its boards are operational.


For each board, the startup sequence consists of three ordered steps:
• boot sequence,
• platform initialization,
• application initialization.
Application initialization covers both the creation and initialization of the GSM BSC
applications, this phase is managed in accordance with the BSC configuration and
available resources.
Some boards are able to start autonomously, booting from non-volatile storage,
whereas others must wait as they require the services of another board when
operational.
The Control Node is operational once application initialization has completed
successfully and the BSC is operational when the Control and Interface Nodes are
operational.
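A minimal sketch of this ordering, assuming hypothetical board names and flags (Python; illustrative only):

    BOOT, PLATFORM_INIT, APPLICATION_INIT = "boot", "platform init", "application init"

    def start_board(name: str, autonomous: bool, server_ready: bool) -> list[str]:
        # A board either boots autonomously from non-volatile storage,
        # or waits until the board whose services it needs is operational.
        if not autonomous and not server_ready:
            raise RuntimeError(f"{name} must wait for its server board")
        return [BOOT, PLATFORM_INIT, APPLICATION_INIT]  # the three ordered steps

    def module_operational(boards: dict[str, list[str]]) -> bool:
        # A module is operational when all of its boards finished application init
        return all(s and s[-1] == APPLICATION_INIT for s in boards.values())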

Startup
3 - BSC or TCU Hot Startup (MIB built)

• Boards
— Active OMU_SBC
— Passive OMU_SBC
— OMU_TM / TMU_TM
— TMU_SBC
— TMU_PMC
— ATM SW
• Modules
— OMU
— TMU
— ATM SW
• C-Node Startup
• I-Node Startup
• T-Node Startup

Since the MIB is already built, only the hardware configuration consistency has to be checked.
We must check that modules have not been introduced or removed while the BSC or the TCU was switched off.
Otherwise, the BSC and the TCU behave as for a cold startup.
The consistency between the new and the previous hardware configuration is checked at the OMC-R level.
Three cases may occur:
• A module has been extracted: the corresponding object is deleted on the MMI
and in the MIB, and an alarm on the father object indicates the suppression.
• A module has been plugged into a previously-free slot: the corresponding
object is automatically created on the MMI and in the MIB, and an alarm on the
father object indicates the creation.
• A module has been replaced by another one:
— The object corresponding to the replaced module is deleted on the MMI
and in the MIB.
— The object corresponding to the newly inserted module is created on the
MMI and in the MIB.
— Alarms on the father object indicate the suppression and the creation.

Student notes:
BSC 3000 and TCU 3000 Maintenance and Enhanced Exploitability

Section 6
Objectives

> After this module of instruction, you will be able to understand the main benefits of the BSC and TCU 3000 architecture for:
• Fault Tolerance
• Load Balancing
• Overload
• Software Upgrade
• Hot insertion/extraction (Plug and Play)
• Fault Management
• Remote ACcess Equipment
• Local Maintenance Terminal

Contents

> New Exploitability Principles
> Fault Tolerance
> Load Balancing
> Overload
> Fault Management
> Software Upgrade
> Upgrade and Build On Line Performances Improvements
> Software Upgrade (continued)
> Hot Insertion/Extraction
> Fault Management (continued)
> Remote ACcess Equipment (RACE)
> Local Maintenance Terminal (TML)

New Exploitability Principles
1 - Redundancy

[Figure: BSC defense. OMU A/B in hot standby redundancy, ATM SW A/B active-active, and TMUs in N+P redundancy with automatic reconfiguration.]

The BSC 3000 and TCU 3000 provide carrier-grade availability. All hardware modules are fully redundant, including the PCM interface modules.
But unlike the BSC 12000, total duplication of all critical BSC hardware is not required, and a board failure does not entail a switchover to a whole set of passive boards.
In the BSC 3000 and TCU 3000, the modules work according to one of the following three modes:
• in hot standby (active/passive) mode: OMU, CEM, IEM (LSA-RC module), 8K-RM (SRT); a single faulty board has no impact on the BSC or TCU, and multiple faults also have no impact, provided that one module (or IEM board) still works in each pair,
• in parallel mode (both modules simultaneously active): ATM SW (ATM switch) + ATM RM, shared MMS + private MMS,
• in N+P mode: TMU, TRM; the modules work in load sharing, running both active and passive processes, and P failures still preserve the nominal capacity.
The Fault Tolerance algorithm implemented in the BSC Control Node allows fast fault recovery, by reconfiguring the software activity on working modules without impacting service.

New Exploitability Principles
2 - Cell Group Concept

• BSC 2G
— up to 2 CPU-BIFP boards (CPUE) dedicated to Call Processing
— cellGroup = collection of BTSs managed on the same board
→ 2 cellGroups

• BSC 3000
— up to 14 TMU modules dedicated to Call Processing
— cellGroup = collection of BTS sites
→ 96 cellGroups

To manage the BTS sites, a new concept is introduced with the BSC 3000: the Cell Group.
Each site (and all the cells and TRXs belonging to this site) is held by a Cell Group.
A Cell Group (CG) can hold several sites.
The CG entity is instantiated into an active and a passive instance, which are located on different TMUs.
The CG is in charge of all the Call Processing related to its BTSs (supervision of the BTS, Call Processing of all the communications in these cells).
The distribution of BTS sites per Cellgroup is an internal algorithm which can be only partially controlled by the operator; it can be configured either:
• automatically and statically by the ADM application,
• by the operator from the OMC, through an optional parameter (number of estimated TRXs) transmitted at site creation.
Each Cellgroup is able to manage up to 300 Erlangs.
Each TMU module is able to manage:
• an average of 300 Erlangs,
• up to 100 TRXs,
• up to 16 Cellgroups (8 active and 8 passive).
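A minimal dimensioning sketch based on the per-TMU figures above (Python; a rough planning aid, not the BSC's internal algorithm):

    import math

    def tmus_needed(total_erlangs: float, total_trx: int, cell_groups: int) -> int:
        # Per-TMU limits quoted above: ~300 Erlangs, 100 TRXs,
        # 16 Cell Groups of which 8 are active
        by_load = math.ceil(total_erlangs / 300)
        by_trx = math.ceil(total_trx / 100)
        by_cg = math.ceil(cell_groups / 8)
        return max(by_load, by_trx, by_cg)

    print(tmus_needed(1200, 350, 20))  # -> 4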

New Exploitability Principles
3 - Cell Group Management

[Figure: in the BSC 3000, the Load Balancing function assigns a new site to a Cell Group.]

The Cell Groups are determined at boot time by the Load Balancing function, according to data associated with the cells:
• when a BTS is added to the BSC, it is added to an existing or a new Cellgroup by the same algorithm,
• when a cell or a TRX is added to a BTS, the corresponding Cellgroup carries more load.
The redistribution of the sites into Cell Groups is a complex task, normally performed by the BSC while respecting the following CG dimensioning rules and capacity objectives:
• 54 CGs per BSC,
• 10 sites maximum per CG,
• 18 CGs per TMU,
• 75 TRXs maximum per CG,
• a maximum of 16 TRXs per cell and 48 TRXs per site (not linked to CG allocation, but the maximum site size in V15.1).
Due to the complexity of the software links, a site must be placed in a CG by the BSC at its creation, and cannot be moved to another CG after that. The only way to move a site from one CG to another is to delete it and then re-create it.
Another possibility is to perform an on-line build (with complete service loss of the whole BSC for a few minutes).

New Exploitability Principles
4 - Estimated Site Load Parameter

• A table predefines the cell Erlang load from 1 to 16 TRXs (by default the Erlang B law, 2% blocking rate).
• estimatedSiteLoad (class 3, object: btsSiteManager): the customer can modify the values (mErlang) in the table and set the estimated load of a cell. Range: [0, 1100] Erlang.

The BSC 3000 Load Balancing feature uses a table that predefines the cell Erlang load from 1 to 16 TRXs: the ERLANG_PER_N_TRX_CELL BSC data configuration table.
The values, in milliErlang, can be modified by the customer without service interruption (class 3).
By default, this table is filled with the Erlang B law results (2% blocking rate).
In V15.1, the possibility to set the estimated Erlang load of the site is offered through the parameter estimatedSiteLoad, a class 3 parameter on the btsSiteManager object.
This parameter is used at site creation to define the Erlang consumption of the new Cell Group, by setting the Erlang consumption to a value different from the one defined by the ERLANG_PER_N_TRX_CELL table.
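For reference, the default table values can be reproduced with the standard Erlang B recursion; a minimal sketch, assuming a cell with n traffic channels and the 2% blocking target (Python):

    def erlang_b(channels: int, offered_load: float) -> float:
        # Recursion: B(0, A) = 1; B(n, A) = A*B(n-1, A) / (n + A*B(n-1, A))
        b = 1.0
        for n in range(1, channels + 1):
            b = offered_load * b / (n + offered_load * b)
        return b

    def capacity(channels: int, blocking: float = 0.02) -> float:
        # Bisect for the offered load A at which B(channels, A) equals the target
        lo, hi = 0.0, 2.0 * channels + 10.0
        for _ in range(60):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if erlang_b(channels, mid) < blocking else (lo, mid)
        return lo

    print(round(capacity(7), 2))  # about 2.94 Erlangs for 7 channels at 2% blocking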

New Exploitability Principles
5 - Fault Tolerance, Load Balancing and Overload

> Two kinds of software:
→ Fault Tolerance entities, "launched" by the FT application and supervised by FT,
→ non-FT entities.

GSM applications may be either Fault Tolerant (FT) or non Fault Tolerant (non
FT).
A Fault Tolerant application is an application that is replicated.
Load Balancing only applies to Fault Tolerant applications; it relies on the following FT primitives to balance FT applications between TMUs:
• CREATE, to create a passive entity,
• FLUSH, to synchronize a passive entity on an active one,
• SWACT, to switch activity from an active entity to a passive one,
• KILL, to destroy a passive or an active entity.
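A minimal sketch of how these primitives combine, assuming hypothetical entity and TMU names (Python; illustrative only, not the BSC implementation):

    class FTEntity:
        def __init__(self, name: str, active_tmu: str):
            self.name, self.active, self.passives = name, active_tmu, []

        def create(self, tmu: str):   # CREATE: add a passive instance
            self.passives.append(tmu)

        def flush(self, tmu: str):    # FLUSH: synchronize a passive on the active
            assert tmu in self.passives

        def swact(self, tmu: str):    # SWACT: switch activity to a passive
            self.passives.remove(tmu)
            self.passives.append(self.active)
            self.active = tmu

        def kill(self, tmu: str):     # KILL: destroy a passive instance
            self.passives.remove(tmu)

    # Moving an entity from TMU#1 to TMU#3 without a break in service:
    e = FTEntity("cellgroup-7", "TMU#1")
    e.create("TMU#3"); e.flush("TMU#3"); e.swact("TMU#3"); e.kill("TMU#1")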

New Exploitability Principles
6 - BSC 3000 Support of the BSS-Based Solution

• The BSC and the SMLC directly exchange BSSMAP-LE and BSSLAP messages over the Lb logical interface.
• The four location methods are supported.

[Figure: LCS architecture. The MS and BTS (Um, Abis) connect to the BSC, which has an Lb interface to the SMLC (location applications) and an A interface to the MSC/VLR; the GMLC is linked to the MSC over Lg and to the HLR over Lh.]

In the new BSS architecture, Nortel follows the 3GPP specification for the Lb interface.
The Lb interface is used only for the LCS application and relies on SS7.
There are two SS7 interfaces: A and Lb. The BSC has to manage the dialogue with multiple distant point codes (SMLC and MSC). This requires SCCP and MTP3 layers supporting multiple SSNs and multiple DPCs.
Each interface (A and Lb) relies on one distinct physical route from the BSC.
Since the SMLC, the MSC and the BSC are part of the same SS7 network, the set of SCCP parameters should be identical for the Lb and A interfaces.

Fault Tolerance
1 - Fault Tolerance Software

[Figure: the active instance on module #1 keeps the passive instance on module #2 up to date (current context update); on failure of module #1, a SWACT turns the instance on module #2 active.]

A Fault Tolerant application is an application which can survive a hardware fault.


For the Control Node, this is done by having a single active instance on a given
module (TMU or OMU) and having one (or more) replica instances called passive,
located on a different module.
The active instance of the FT application runs the application code, whereas the
passive instance does not.
The passive instance(s) of the FT application are simply kept “up-to-date” with the
current context of the active instance.
Therefore, if the hardware with an FT application is running and fails, the passive
instance can take over and continue to run the application without any break in
service.
The previous passive instance becomes the new active instance.
The process of changing a passive instance to an active instance or vice versa is
called a SWitch of ACTivity or SWACT.

Fault Tolerance
2 - Example: SWACT on TMU Failure

> The GSM Core Process is a set of FT applications managing a set of sites (Cellgroup):
→ TMG_RAD for radio resource management
→ TMG_CNX for connection management (setup, release, assignment, HO)
→ TMG_MES for A interface messages (paging, incoming HO)
→ TMG_L1M for Layer 1 management
→ SPR for BTS site supervision
→ SPT for TCU supervision
→ TMG_RPP for PCUSN supervision
→ OBS for observations

[Figure: three active Core Processes A1-A3 and their passives P1-P3 distributed over TMU#1 to TMU#3; after TMU#1 fails, the passive instance of A1 becomes active on another TMU.]

All processing relative to a cellgroup is executed on a single TMU. The corresponding passive (or redundant) process is executed on another TMU.
In this example, there are three groups of processes, each of which is composed, for Fault Tolerance purposes, of one active process "Ai" plus one passive process "Pi", with i as the application identifier.
The three active processes are distributed over three TMUs.
The passive process related to an active process does not run on the same TMU as the active process, but on another TMU.
The passive processes are directly and continuously updated by their corresponding active processes, using internal messaging.
On failure of TMU#1, the Fault Tolerance algorithm performs a SWACT by "electing" the passive A1 process as active.
The figure shows the new distribution of processing over the two remaining TMUs.
A TMU managing 300 Erlangs processes around three handovers per second, thus less than one external HO per second. A maximum of 10 messages per second (50 bytes of payload) are exchanged between two TMUs.
For connection management, each TMU exchanges up to 50 messages per second (20 bytes of payload) with the Interface Node.
At SWACT time (failure), the messaging activity between OMUs and TMUs needs a bandwidth of 2.5 to 4 Mbytes during 2 or 3 seconds.

Load Balancing
1 - Principle

[Figure: four active/passive process pairs over TMU#1 (A1-A4) and TMU#2 (P1-P4) are rebalanced so that each TMU runs two active and two passive processes (A1, P2, A3, P4 on TMU#1; P1, A2, P3, A4 on TMU#2).]

The purpose of the Load Balancing function is to distribute processing in an optimal way between the TMUs and to use the BSC resources optimally.
This is performed by distributing the processes related to the different Cellgroups
(i.e. sets of cells belonging to the same process) “equally” over the TMUs.
The distribution of Cell groups and redundant processes is also done
automatically by the system at boot time.
Load Balancing allows a redistribution of Cell groups on the TMUs, without
disturbing the calls and is executed:
• when a TMU module fails or comes into operation (for hardware or operator
reasons),
• when Cell groups are modified (to add BTSs),
• when an imbalance of the TMU CPU load is detected by the BSC: in this case,
the load balancing can be done during non-busy hours.

Load Balancing
2 - Example: Adding a TMU

[Figure: with two TMUs, TMU#1 holds A1, A2 and P3 while TMU#2 holds P1, P2 and A3; after a third TMU is added, the active and passive processes are redistributed evenly over the three TMUs.]

In the system, the processor load of each TMU depends mainly on the number of BTSs/cells/TRXs to manage and on the related amount of traffic.
When there are modifications to a BTS configuration (addition of TRXs) or to the BSC configuration (addition of TMUs), the Load Balancing service allows redistribution of the processing with the best use of the BSC resources.
The figure gives an example of the use of Load Balancing when a TMU is added to the BSC.
The initial configuration of the BSC is two TMUs, and one more is added and provisioned for traffic management:
• the BSC automatically computes a new distribution and applies it,
• the redistribution is achieved without exposure time by:
— adding new passive members to the groups,
— swapping their activity,
— suppressing the useless passive members.
The Load Balancing operation is achieved by using the Fault Tolerance service.
In fact, the redistribution of the processing is obtained by "electing" active processes with the best location distribution ("best" here means taking into account all the parameters that specify the LB criteria).
This "election" leads to several SWACTs achieved by Fault Tolerance.

Overload
1 - Principles

[Figure: paging, handover and location update load reaches the TMUs; the CPU load and memory occupancy of the monitored modules (TMU, CEM) are reported to the OMU, which controls the overall load state of the BSC.]

The BSC 3000 robustness in overload conditions is ensured by a centralized overload control mechanism, based on the same principles as the overload control implemented for the BSC 12000 in BSS release V12.
Overload management is the function that allows the system to absorb sporadic load peaks without any loss of service on already-established communications and still-covered areas. The TMU and CEM modules, which can reach the overload state, are monitored. Overload management is a dynamic and reactive process.
Each module reports its synthetic load to the OMU, which globally controls the load state of the BSC and triggers the appropriate action according to the module in overload (TMU or CEM) and to the level of overload.
The following parameters are observed in order to manage overload correctly:
• CPU load,
• system memory occupancy,
• messaging level: queues and delays.
In nominal mode, the main load peaks are generated on TMUs, due to Call Processing needs and also to the amount of other processing, for example the management of a large number of BTSs and several TCUs. This is based on the thresholds allowed per software domain, such as:
• GSM Call Processing,
• BTS_OAM,
• TCU_OAM,
• Platform management.
The goal is to prevent the named entities from using more resources than those allowed by the given threshold(s).

Overload
2 - TMU Mechanism

Only traffic management operations are taken into account in this mechanism. Current communications are maintained (except for incoming HO requests above threshold 3).

Overload levels:
• Level 3 = 100% CPU load: all messages are filtered
• Level 2 = 90%: 2/3 of messages are filtered
• Level 1 = 80%: 1/3 of messages are filtered
Hysteresis is applied at each threshold.

Messages filtered:
• Paging request
• Channel request (non-emergency)
• all first Layer 3 messages (non-emergency)
• HO request (traffic reason)
• HO request (O&M reason)
• Directed retry

TMU modules are relatively independent of one another in terms of overload handling. Since a TMU module manages the traffic of a group of cells, when a TMU module is in overload it partially filters the new incoming traffic requests related to the group of cells it manages.
Three overload levels are defined for each monitored processor.
For each level, some of the new traffic requests are filtered:
• level 1 (80% of processor load): traffic reduction by around 33% by filtering one
request out of three of the following messages:
— Paging Request,
— Channel Request (not Emergency Call),
— all first Layer 3 messages (not Emergency Call),
— handover for traffic reason,
— handover for O&M reason,
— directed retry,
• level 2 (90% of processor load): traffic reduction around 66% by filtering two requests
out of three of the above messages,
• level 3 (100% of processor load): no new traffic is accepted by filtering all previous
and following messages:
— all first layer 3 messages,
— all Channel Requests,
— all handover indications,
— all handover requests.
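A minimal sketch of this filtering policy, assuming the thresholds above and a simple 1-in-N counter filter (Python; the hysteresis applied at each threshold is omitted):

    FILTERED = ("paging", "channel_request", "first_layer3",
                "ho_traffic", "ho_oam", "directed_retry")

    def drop_fraction(cpu_load: float) -> float:
        # Level 1 (80%): filter 1/3; level 2 (90%): 2/3; level 3 (100%): all
        if cpu_load >= 1.00: return 1.0
        if cpu_load >= 0.90: return 2 / 3
        if cpu_load >= 0.80: return 1 / 3
        return 0.0

    def accept(msg: str, cpu_load: float, counter: int, emergency: bool = False) -> bool:
        # Emergency calls and unlisted message types are never filtered
        if emergency or msg not in FILTERED:
            return True
        f = drop_fraction(cpu_load)
        return f < 1.0 and (counter % 3) >= round(3 * f)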

Fault Management
1 - Impact on Service in the Control Node

What happens when a module is down:
1 - No impact on traffic
2 - No disturbance of traffic, or a slight delay
3 - The corresponding OMU module is down
4 - SWACT to the other OMU, no traffic impact

[Figure: Control Node shelf layout with these callouts placed on the corresponding modules (ATM SWs, TMUs, OMUs, shared and private MMSs, SIMs and fillers).]

Control Node behavior in the case of a faulty module.

There is no impact on traffic in the case of:
• a faulty OMU,
• a faulty ATM-SW.

In the case of a faulty MMS:
• private MMS: the corresponding OMU is lost,
• shared MMS: no effect on traffic or OAM.

In the case of a faulty TMU:
• loss of the communications being established or handed over, and no more duplex mode until the passive processes are re-established,
• if there are not enough TMU modules to handle the processes, traffic may be lost.

Fault Management
2 - Impact on Service in the Interface Node

What happens when a module is down:
1 - The connections may be set up with some delay towards the C-Node
2 - No disturbance on traffic
3 - No more communication
4 - The connections are switched over to the second IEM

[Figure: Interface Node shelf layout with these callouts placed on the corresponding modules (LSA RCs/IEMs, ATM RMs, CEMs, 8k RMs).]

Interface Node behavior in the case of a faulty module.

There is no impact on traffic in the case of:
• a faulty 8K-RM.

Connections may be set up with some delay in the case of:
• a faulty ATM-RM,
• a faulty CEM.

In the case of a faulty LSA-RC:
• faulty IEM: the signal on the PCM is passed straight through, with no impact on signaling,
• faulty TIM: loss of all PCM connections on the corresponding LSA-RC.

Fault Management
3 - Impact on Service in the Transcoder Node

What happens when a module is down:
1 - The connections may be set up with some delay towards the I-Node
2 - No signaling disturbance, but slight problems on telephony may appear
3 - The connections are down
4 - No impact on traffic, SWACT to the other IEM

[Figure: Transcoding Node shelf layout with these callouts placed on the corresponding modules (CEMs, TRMs, LSA RCs/IEMs).]

Transcoding Node behavior in the case of a faulty module.

In the case of a faulty CEM:
• loss of the communications being established, for the active processes on the corresponding CEM.

In the case of a faulty TRM:
• slight loss in voice quality,
• no impact on signaling.

In the case of a faulty LSA-RC:
• faulty IEM: the signal on the PCM is passed straight through, with no impact on signaling,
• faulty TIM: loss of all PCM connections on the corresponding LSA-RC.

Software Upgrade
1 - Overview

Software upgrade from version N to version N+1:
• uses a zero-downtime mechanism based on the replicated architecture provided by Fault Tolerance and Load Balancing,
• the operator does not have to travel to the site: a single person can control the whole upgrade sequence remotely from the OMC-R.

A new version and edition of software can be downloaded remotely without any operational impact; only the files modified in the new version are downloaded.
Before any upgrade procedure, the equipment (BSC/TCU) must check its hardware (flash memory checksum).
The execution of the upgrade is ordered by the OMC-R and controlled by the BSC (in the OMU module) after the complete transfer of the new files. Only the modules whose software has changed are downloaded again.

The first phase of the software upgrade can be performed long before the upgrade of a module. This allows the upgrade data to be transferred to the MIB (Managed Information Base) located on the "shared" disk of the Control Node. This operation is done while the BSC 3000 is working, without any service disturbance (except a bandwidth reduction).
Then, the Control Node sends upgrade orders to the CEM module, which manages the upgrade of the concerned module itself, without interrupting the services that are running.

Upgrade and Build On Line
Performances Improvements (1/5)

UPGRADE TYPE 4

[Figure: the Control Node (passive/active OMUs, CC1_1/CC1_2, TMU_1 to TMU_N) moves from V15 to V15.1 while the Interface Node (IEM pairs, CEM pair, 8K pair, ATM_1/ATM_2) is upgraded in parallel.]

In the previous behavior, the CN is upgraded first and the IN upgrade is then triggered. This serialization of the CN/IN upgrades was chosen to prevent interoperability issues; in particular, it prevented having an IN in the N+1 release interacting with a CN in the N release.
This serialization of the CN and IN upgrades can be relaxed, provided that interoperability is no longer an issue when the CN and IN are in heterogeneous releases (CN in release N and IN in release N+1). The IN upgrade is triggered as soon as the OMUs, the CC1s and the first TMU have been upgraded successfully.
New CC1 upgrade behavior (previously, no check was done on the ATM-RM status):
• check that both ATM-RMs are enabled/on-line before beginning the upgrade and upgrading the CC1.
New ATM-RM behavior:
• in the V15.1 release, the ATM-RM is expected not to reset when it detects a loss of signal on the OC3 fiber.
6-20
Upgrade and Build On Line
Performances Improvements (2/5)
BUILD ONLINE
• Active OMU Application restart instead of OMU reset
• Save AIX start up and disk mounting

• New MIB Activation Protocol:


• Clear config request * to IN/TCU instead of reset

Target Downtime reduction ( Half of V14.3)

* clean-up of all resources on IN/TCU)


411-1075A-001.1603
6-21 November, 2006
FOR TRAINING PURPOSES ONLY

In V14.3, the complete restart of the BSC is triggered by the upgrade Control Node manager, which sends a Control Node reset request to hardware management. Upon reception of this request, hardware management resets first the TMUs, then the CC1, the passive OMU and finally the active OMU.
Actually, there is no need to reset the active OMU; only the applications need to be restarted to load the new data from the new MIB. This saves the AIX startup latency and the mounting of the shared disks. The average AIX startup latency is around 4 minutes. For that purpose, upgrade CN sends a Control Node restart to hardware management, which triggers a backplane Control Node restart.
The backplane Control Node restart triggers the following actions:
• stop all applications on the active OMU,
• reboot the OMU_TM of the active OMU,
• restart all applications on the active OMU,
• check the shared disk.

The clear config request impacts the upgrade Control Node manager, which must not reset the IN/TCU before resetting the Control Node during the activation of the new MIB.

Upgrade and Build On Line
Performances Improvements (3/5)

UPGRADE OFF LINE
• OMU application restart used to restart the CN instead of resetting it during an upgrade of type 6 or 7
— saves the AIX startup and disk mounting
• Clear config request* to the TCU instead of a reset
• New OMU flash upgrade protocol:
— replaces the CN reset by an OMU restart
— parallelizes multiple tasks

* clean-up of all resources on the TCU

The OMU application restart is used to restart the Control Node instead of resetting it during an upgrade of type 6 or type 7. This OMU application restart requires resetting the active OMU at the end of the offline upgrade, making sure that the low-level deliveries are loaded on the active OMU. This OMU reset must be synchronized with Load Balancing and IN events.
The clear config can be leveraged during an offline upgrade to gracefully restart the TCU instead of resetting it. For that purpose, the upgrade Control Node manager sends a CLEAR_CONFIG_REQ to the TCU instead of a RESET_REQUEST. This way, the TCU is ready to be configured very shortly afterwards. Note that the IN must still be reset, since the Control Node and the IN are upgraded at the same time.
A new OMU flash upgrade protocol has been proposed to significantly decrease the offline upgrade downtime. This protocol relies on the OMU application restart to shorten the latency of the OMU flash upgrade. Precisely, this new protocol replaces each Control Node reset by an OMU restart. Furthermore, it also makes it possible to parallelize multiple tasks that were previously serialized.

Upgrade and Build On Line
Performances Improvements (4/5)

[Figure: V16 BSC startup chronogram during an offline activation. The active OMU startup (2 min 30) is followed by the active Core Process startup (1 min 30), with the passive OMU startup (2 min 30) and the passive CP startup (1 min 30) at the end; the IN critical path and IN configuration, and the TCU clear config and configuration, overlap with the OMU startup. Downtime before first call: ~5 min; downtime before full duplex: ~7 min.]

Main improvements:
• The passive OMU startup is postponed to the end of the Control Node startup, concurrently with the startup of the passive core processes among the TMUs.
• The IN critical path duration decreases to two minutes (see section 4.7.1). This enables the IN critical path latency to overlap entirely with the active OMU startup duration. Note that the requirement differs for the IN and TCU critical path improvements: the IN critical path must not exceed 2 minutes, whereas the TCU critical path can be a little longer without any impact on the overall BSC downtime.
• Core processes are started up concurrently on different TMUs.

Upgrade and Build On Line
Performances Improvements (5/5)

[Figure: message sequence for an offline TCU upgrade without locking the TCU (TGE backgroundTcuUpgrade from the OMC; alarms 2034/2024). The BSC updates the TCU software links to the new version, checks the upgrade conditions, and FTP-loads the flash of all TCU boards with the new N+1 release while the TCU stays in service; the TCU then acknowledges the upgrade with reset, auto-resets, re-establishes the dialogue (Init_dialog_req/ack) and the PCM configuration during a "fuzzy period", and the startup upgrade completes with an upgrade ack without reset.]

Previously, the TCU offline upgrade protocol required locking the TCU prior to activating the upgrade. This TCU lock incurred a very long interruption of service, because the offline upgrade includes the flash download of the TCU boards via the LAPD channels, hence through a limited bandwidth. Recent performance measurements have shown that the TCU software download is longer than the IN software download by an order of magnitude.

The major improvement is to trigger an offline TCU upgrade without having to lock the TCU prior to the offline upgrade activation, hence keeping the TCU in service during the software download.

Software Upgrade
2 - OMU Software

[Figure: the passive OMU (applications, platform and base OS in version N) is reset and boots with the new software, becoming active; the previously active OMU becomes passive and is in turn reset to boot on the new software version.]

A software upgrade may only impact some parts of the software entities:
→ base OS (AIX),
→ platform (OAM, FT, messaging, LB, ...),
→ applications (supervision, performance and fault management, software downloading, LAPD and SS7 management, ...).

Upgrading of the passive OMU can be separated into two phases:


• application software upgrade,
• AIX upgrade.
The master module of each Node is upgraded first: OMU (C-Node) and CEM (I-
Node or T-Node).
Application software upgrade:
• The passive OMU is reset and boots with the new software version.
• When the passive OMU has entirely recovered and is correctly updated, the
OMUs perform a swap and the new active OMU runs the new software
version.
• The new passive OMU is then reset to boot on the new version (see the
sketch below).
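The three steps above can be summarized in a small Python sketch; the dictionaries and the helper name are hypothetical, used only to show the order of the reset and swap operations.

```python
# Minimal sketch of the OMU application software upgrade sequence.
# The dictionaries and the helper name are hypothetical.

def upgrade_omu_pair(omus):
    """omus: two dicts with 'role' ('active'/'passive') and 'version'."""
    active = next(o for o in omus if o["role"] == "active")
    passive = next(o for o in omus if o["role"] == "passive")

    # 1. The passive OMU is reset and boots with the new version.
    passive["version"] = "N+1"

    # 2. Once it has recovered and is correctly updated, a swap makes
    #    it active: the new active OMU runs the new software version.
    active["role"], passive["role"] = "passive", "active"

    # 3. The new passive OMU (the previously active one) is reset in
    #    turn to boot on the new version.
    active["version"] = "N+1"
    return omus

pair = [{"role": "active", "version": "N"},
        {"role": "passive", "version": "N"}]
print(upgrade_omu_pair(pair))
```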
AIX upgrade
AIX is the operating system of the BSC 3000, hosted in the private disks (MMS).
The AIX upgrade differs greatly depending on whether it is a complete new version or an update.
When a complete re-installation is required, the private disk of the OMU has to be
erased and re-written. This is done via the other OMU, which acts as boot server,
and takes between half an hour and one hour. During the installation the OMU is
not bootable and the BSC is in a phase without OMU redundancy.
AIX updates are made with “file-sets” which can be installed online. A reboot of the
OMU may be necessary. If so, it is done on the passive OMU.

Software Upgrade
3 - TMU Software: Principle
[Diagram: four TMUs in N+P load sharing, with active (A1–A4) and passive (P1–P4) processes redistributed while one TMU is isolated, reset and upgraded from version N to N+1.]
1° The TMU is isolated.
2° The TMU resets and boots with the new software.
3° The TMU joins the group to retrieve the processes it hosted previously.


The TMU upgrade is the most complex: call processing is managed by the TMUs
in real time during the upgrade, these modules are in N+P "load sharing"
redundancy, and the upgrade must be performed without any interruption of
service.
The advantage of redundancy during a software upgrade is the ability to manage
the "N" and "N+1" versions together during transient states of the system, with
minimal risk. The two software versions, N and N+1, are assumed to be fully
compatible.
The upgrade is always executed concurrently with GSM traffic management,
which remains active.
TMU modules are upgraded one by one as follows:
• One TMU is relieved of all its processes, so that active and passive
processes are supported entirely by the other TMUs.
• Once isolated, the TMU resets and boots on the new software version: the
TMU flash is rewritten at this time.
• Once recovered, the TMU (N+1 version) joins the TMU group (N version) to
retrieve the applicative processes it hosted before the upgrade.

Software Upgrade
4 - TMU Software: Upgrade Wave
[Diagram: upgrade wave across TMU#1–TMU#4, showing the active and passive processes moving between boards as each wave of TMUs goes from version N to N+1.]

Thanks to N+P replication, no downtime should occur during this upgrade wave


To maintain the traffic management activity during the upgrade, the upgrade is
performed by “waves”, by one set of boards at a time:
• First, all the traffic is transferred to TMUs that are in version N.
• The other TMUs are isolated (the size of the wave is a configuration
parameter).
• The isolated TMUs are upgraded (software downloading, initialization, etc.).
• To avoid service interruption, passive members are first created on the newly
upgraded boards.
• Finally, activity is transferred to them.
• During the period of coexistence of the two releases, some restrictions may
apply depending on the compatibility level between the two versions: no handover
between the N and N+1 areas, etc. (a sketch of the wave mechanism follows below).
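As a rough illustration (and not the actual BSC software), the following Python sketch models the wave mechanism described in the list above; the wave size is the configuration parameter mentioned in the text, and the traffic and process migration steps are only represented by comments.

```python
# Minimal sketch of the TMU upgrade "wave" mechanism. TMUs are modeled
# as version strings; wave_size is the configuration parameter
# mentioned in the text. All names are illustrative only.

def upgrade_in_waves(tmus, wave_size):
    """Upgrade TMUs from 'N' to 'N+1' one wave at a time, keeping the
    remaining 'N' boards carrying all active and passive processes."""
    tmus = list(tmus)
    for start in range(0, len(tmus), wave_size):
        # 1. Traffic is first transferred to the TMUs still in version N
        #    (not modeled here), then the wave is isolated.
        # 2. The isolated TMUs are upgraded (download, init, ...).
        for i in range(start, min(start + wave_size, len(tmus))):
            tmus[i] = "N+1"
        # 3. Passive members are created on the upgraded boards first,
        #    then activity is transferred to them (not modeled).
        yield list(tmus)

for snapshot in upgrade_in_waves(["N"] * 4, wave_size=2):
    print(snapshot)
```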

Software Upgrade
5 - CEM or RM Software Upgrade (ATM-RM, 8K-RM, IEM)
[Diagram: active and passive CEM or RM, each running applications (A1–A3 / P1–P3) on top of the platform (N) and base OS (N).]

> Software upgrade may impact only some parts of the software entities:
• base OS,
• platform (OAM, messaging, …),
• applications (software downloading, etc.).


For the CEM modules and the RMs, which have a 1+1 redundancy factor, the
upgrade of the protection group is done as follows:
• the software packages are loaded into the passive RM or the passive CEM
module,
• a SWACT (switch of activity) is performed between:
— the passive CEM module and the active CEM module,
— the active RM and the passive RM.

Software Upgrade
6 - TRM Software Upgrade
[Diagram: rolling upgrade of TRM1–TRM10 from release N to N+1.]
Step 1: soft blocking on the first module; load sharing on the other TRM modules; the N+1 release is loaded on TRM1.
Step 2: soft blocking on the second module; load sharing on the other modules.
Steps 3 to final step: the sequence repeats; at the final step the new software has been downloaded on each TRM.


For the TRMs, which have an N+P redundancy factor (P=1), the upgrade of the
protection group is done as follows (see the sketch below):
• a "soft blocking" order is sent to the TRM concerned,
• new communications are distributed to the other TRMs,
• once the communications in progress inside the TRM concerned are
completed, the software upgrade is performed.
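The following Python sketch illustrates this rolling, soft-blocking upgrade on a 10-TRM group; it is a simplified model of the steps above, not the real implementation, and the call draining is only represented by comments.

```python
# Minimal sketch of the TRM rolling upgrade with "soft blocking".
# Each TRM drains its ongoing communications before being upgraded.

def upgrade_trms(trm_versions):
    """Upgrade the N+1-redundant TRM group one board at a time."""
    for i in range(len(trm_versions)):
        # 1. Soft blocking: this TRM accepts no new communications;
        #    new calls are load-shared over the other TRMs.
        # 2. Once the communications in progress are completed
        #    (drained), the N+1 software is downloaded.
        trm_versions[i] = "N+1"
        print(f"TRM{i + 1} upgraded -> {trm_versions}")
    return trm_versions

upgrade_trms(["N"] * 10)
```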

Hot Insertion/Extraction
1 - Overview

> Hot module insertion or extraction without service interruption
> Automatic or semi-automatic "plug and play" configuration capability
> Easy hardware maintenance or extension by simply extracting or plugging modules


The hardware modules of the BSC 3000 have a hot insertion and extraction
capability. This means that a hardware module can be replaced or added in the
equipment without shutting down the machine even partly and without any impact
on service.
Furthermore, the BSC 3000 offers “plug & play” (or auto discovery) capability both
for equipment startup and for module hot insertion.
The modules are automatically detected, started and configured, allowing easy
and efficient maintenance of the BSC 3000 and TCU 3000 hardware equipment.
The BSC 3000 and TCU 3000 report information about their hardware
configuration automatically to the OMC-R.
Because of this architecture, the “hot plug & play” feature does not apply to the
LSA-RC module: TIM and RCM boards are not involved.
Module extraction
When a module is extracted, a notification is sent to the OMC-R: this notification is
a state change to “disabled/{notInstalled}” of the object that was previously in the
slot. On reception of this state change, the OMC-R deletes the corresponding
logical object and removes it from the HMI and the MIB.
An alarm is generated at OMC-R level on the father object to indicate that a
module has been removed.
Hot extraction of a module can be performed without any tools, but the OMU
and MMS modules require an operator action on the front-face pushbutton,
using a pencil.

Hot Insertion/Extraction
2 - Hot Insertion Procedure

[Diagram: hot insertion message flow between the local or external Q.3 manager, the MD-R, the MOD and the BSC/TCU 3000 (TGE begin / RGE reporting):
1° module insertion detection,
2° HardwareInsertion event,
3° MOD and Q.3 logical identifiers allocation,
4° instance storage in the MIB / BDE (BDA),
5° spontaneous creation on Q.3,
6° ObjectCreation notification,
7° board appearance (local manager) and creation reporting,
8° module event reporting.]
TGE = Transaction Globale d'Exploitation (global operation transaction)
RGE = Réponse Globale d'Exploitation (global operation response)


Module insertion
C-Node (Control Node) and I-Node (Interface Node) objects are automatically
created when the user creates the BSC 3000 object on the OMC-R.
The Platform sends notifications indicating the hardware configuration. This
hardware configuration is detected on the corresponding platform object
(C-Node, I-Node, LSA or T-Node).
This information is stored on the MMS disk and sent to the OMC-R. It can be read
on the MMS disk, even when a module is out of service.
The information is also stored at OMC-R level and can be displayed upon operator
request.
Module hot insertion may be described as follows:
• module insertion by the craftsperson,
• hardware detection and BIST,
• front panel LED state depending on BIST results,
• verification by the craftsperson that the LED state is correct,
• hardware detection notification including BIST results sent towards the
OMC-R,
• the module is created at the OMC-R and is displayed on the HMI (this sequence is sketched below).
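A minimal Python sketch of this insertion sequence is given below; the event strings are simplified paraphrases of the steps above, and the BIST result is the only input. This is an illustration, not the actual detection software.

```python
# Minimal sketch of the hot insertion sequence, from the craftsperson's
# module insertion up to object creation at the OMC-R.

def hot_insertion(bist_ok: bool) -> list[str]:
    events = ["module inserted by the craftsperson",
              "hardware detected, BIST running"]
    # Front panel LED state depends on the BIST result; the
    # craftsperson checks that the LED state is correct.
    events.append("LED state: " + ("module OK" if bist_ok else "alarm"))
    if bist_ok:
        # The detection notification (with BIST results) goes to the
        # OMC-R, which creates the object and displays it on the HMI.
        events += ["hardware detection notification sent to the OMC-R",
                   "module created at the OMC-R and displayed on the HMI"]
    return events

for event in hot_insertion(bist_ok=True):
    print(event)
```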

Fault Management
1 - Remote Maintenance Capability

> Remote reset/switch-over
> Remote tests:
• on-line tests
• off-line tests
> Technical status:
• hardware references
• BIST (Built-In Self Test) results
• hardware faults…

Easy maintenance platform:
• The network operator can remotely trigger a module reset or switch-over.
• He can also trigger on-line or off-line tests from his network management center.
• The network operator has a permanent technical status at network management center level.


BSC 3000 & TCU 3000 hardware management from the OMC-R is based on the
hardware detection capability of the new generation platform. All faults concerning the
components of an object are reported to the OMC-R.
The FM application is hierarchically structured: the processor, module, I-Node, C-Node
and BSC levels of the OA&M function are each able to detect, analyze, filter and react to
a fault, provided that their level is able and authorized to manage this fault given its
potential system-wide complexity.
For example, an ATM fault detected between the ATM SW and TMU modules cannot be
corrected directly by the TMU OA&M application, but only by the Control Node OA&M
application located on the OMU module.
The OMU is the FM master module for the Control Node and the BSC 3000: it stores the
fault events in circular files and sends them to the OMC-R.
The CEM is the FM master module for the Interface Node. Two kinds of information are
sent by the Interface Node in the case of equipment failure:
• the state changes treated by the I-Node/OA&M application,
• the details of the fault, forwarded to the OMC-R for maintenance purpose.
There are two levels of fault:
• faults that do not impact the availability of the object: failure of an IEM (LSA-RC), a
CEM, an ATM or an 8K-RM,
• faults that make the object unavailable: failure of both cards or modules, failure on a
TIM of an LSA-RC.
In the case of hardware failure, a craftsperson needs to repair the failure by changing the
faulty module.
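The hierarchical handling described above can be sketched as a simple escalation loop; the level names and the capability predicate below are illustrative assumptions, not the actual FM implementation.

```python
# Illustrative sketch of hierarchical fault management: each OA&M level
# handles a fault only if it is able and authorized to, otherwise the
# fault escalates upward. Levels and predicate are simplified.

LEVELS = ["processor", "module", "Node (I/C)", "BSC", "OMC-R"]

def handle_fault(fault: str, can_handle) -> str:
    """Escalate the fault upward until some level can manage it."""
    for level in LEVELS:
        if can_handle(level, fault):
            return f"{fault}: handled at {level} level"
    return f"{fault}: left to the operator"

# Example from the text: an ATM fault between ATM SW and TMU cannot be
# corrected by the TMU OA&M, only by the Control Node OA&M on the OMU.
print(handle_fault("ATM fault", lambda level, f: level == "Node (I/C)"))
```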

Fault Management
2 - On Board Inventory Information

> Fast detection
> Fix without any service interruption
> Reliable diagnostic
> Report with accurate identification
[Diagram: BSC 3000 shelves with their modules (ATM SW, OMU, TMU, MMS, SIM A/B and fillers in slots 1–15); a faulty board is reported with accurate identification, e.g. "TMU board, version xx, shelf 2, slot 3, serial xxxxx".]


The fault events sent to the OMC-R contain all the information necessary for
supervision and maintenance: type of fault, criticality, service impact and
impacted hardware.
Hardware failures are notified directly on the related hardware module, so that the
OMC-R can display the failed equipment precisely to the operator.
On board inventory information for the equipment (BSC 3000 and TCU 3000):
• Physical location,
• Site,
• Unit,
• Floor,
• Row Position,
• Bay Identifier.
For the FRUs (Field Replaceable Units):
• Serial number (Corporate Standard 5014.00 compliant),
• Module Name (generic name of the module family),
• Module type (PEC code = product engineering code),
• Hardware release,
• Hardware position (shelf, slot); a sketch of such a record follows below.
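For illustration, the FRU fields listed above can be grouped into a simple record; the field names and placeholder values below are assumptions, not the actual notification encoding.

```python
# Minimal sketch of the on-board inventory data for an FRU, as a plain
# data structure. Field names are illustrative.

from dataclasses import dataclass

@dataclass
class FruInventory:
    serial_number: str      # Corporate Standard 5014.00 compliant
    module_name: str        # generic name of the module family
    module_type: str        # PEC code (product engineering code)
    hardware_release: str
    shelf: int              # hardware position
    slot: int

fault_report = FruInventory(serial_number="xxxxx", module_name="TMU",
                            module_type="xxxxx", hardware_release="xx",
                            shelf=2, slot=3)
print(fault_report)
```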

Fault Management
3 - LED of all Modules in BSC 3000 and TCU 3000
(except MMS modules)

[Table: red and green LED combinations and their meaning for all modules:
– not powered
– BIST running
– module is active
– module is passive
– alarm state
– path finding (*)
The LED state symbols (off / on / blinking combinations of the two LEDs) were shown graphically and are not reproduced here.]

(*) Path finding indicates that a board is flagged for replacement, or for any other reason, in order to avoid errors by the maintenance staff.


All BSC 3000 & TCU 3000 modules have the same two LEDs on the upper part of
the front face of each module to facilitate on-site maintenance and to reduce the
risk of human error.
This table gives the description, combinations and states of the red LED and the
green LED for each module (except the MMS module) inside the BSC 3000
cabinet and the TCU 3000 cabinet.

Fault Management
4 - LED of MMS Modules in the BSC 3000

[Table: red and green LED combinations and their meaning for the MMS modules:
– the MMS module is not powered
– the MMS module is not managed or not created
– the MMS module is locked and not operational (disk updating or stopping)
– the disk is operational and updated (unlocked)
– alarm state
– path finding: the MMS module can be removed
– read/write operation on the disk
The LED state symbols were shown graphically and are not reproduced here.]


This table gives the description, combinations and states of the red LED and the
green LED for the MMS modules in the BSC 3000 cabinet.
A round yellow LED on the disk blinks to indicate a read/write operation.

Remote ACcess Equipment RACE
1 - HTTP/RACE Server on an OMC-R WorkStation
[Diagram: HTTP/RACE server on an OMC-R workstation. RACE terminals reach the OMC-R site servers over Ethernet; remote RACE clients connect through an IP network (intranet/Internet) behind firewalls, or through PSTN modems. At the BSC site, a TML/RACE client connects to the BSC 3000 via modem; at the BTS site, TML/RACE clients connect to the BTS S8000, S4000/S12000, S2000E and S18000 via modem.]

The Remote ACcess Equipment offers a Web interface to the OMC-R.


It provides users with a user-friendly interface similar to that of the graphical
MMI, and all the functionality of the ROT is available with this new feature.
The RACE was first developed to replace the ROT, but it could also be used as a
particular OMC-R WorkStation.
The advantages of this new product are the following:
• The interface is user-friendly and close to that of the OMC-R; the tool is
therefore easy to handle for a user accustomed to the OMC-R interface.
• Compared to the ROT, which was developed with tools that are now
obsolete, the RACE is implemented using new, object-oriented technologies.
• The RACE is able to ensure secure access to the network, which was no
longer guaranteed with the ROT.
• Thanks to its Web-oriented design, operations and maintenance of radio
subsystems can be done from a remote site without requiring an on-site
OMC-R operator:
— by using PSTN and any kind of secured connection system,
— via BTS or BSC equipment using the BSC-OMC link within the BSS,
— through LAN.

Remote ACcess Equipment RACE
2 - Overview

> Real-time information
[Diagram: the RACE client (Web browser) connects to the HTTP server on an OMC-R workstation acting as RACE server (WWW server), which dialogues with the MMI kernel on the OMC-R server.]


This new application is composed of Web pages and Java applets that can be run
in a Web browser (Netscape or Internet Explorer).
It is adapted to individual operator needs: when the operator must work from
home, or when operations from BTS or BSC sites are required.
A better presentation of the data saves the customers time. For instance, if an
operator had to modify a list of parameters and made a mistake:
• with the ROT, it was mandatory to re-enter all the information,
• with the RACE, using the "Back" button of the browser, only the wrong
parameters have to be modified.
The only requirement to run this feature is a Web browser, which brings two
advantages:
• all data are stored on the server and downloaded at connection time, so the
installation of a RACE client is very quick and there is almost no upgrading
to be done on the client side,
• the operator can use a PC to connect to the OMC-R; such an OMC-R station is
cheaper than a Unix station.
Finally the RACE can run on either an OMC-R WorkStation or an OMC-R server,
with a standard Internet browser for Unix.

Local Maintenance Terminal TML
1 - Overview

[Diagram: the TML PC (Web browser, HTML/Java test software) connects over a LAN to the BSC 3000: the HTTP server and the test server give access, through the management bus and the hardware interface, to the ATM managers and the Interface Node.]


The Local Maintenance Terminal or TML (Terminal de Maintenance Locale)
application is a Java applet stored on the BSC disk.
The TML hardware is a PC: it works under Windows and behaves like a Java
browser.
The TML can be connected to the BSC OMU through Ethernet connections.
The TML can be plugged onto a hub that can be hosted in the SAI of the BSC
3000.
The TML interface is independent of the BSC 3000/TCU 3000 software evolutions.
The TML allows a first BSC 3000/TCU 3000 installation to be performed.
It allows the customization parameters of the BSC 3000/TCU 3000 to be read and
modified:
• BSC number,
• IP address,
• PCM type, etc.
The configuration information on the different hardware modules can be read from
the TML:
• board identification and states,
• software version,
• software and patch markers.

Local Maintenance Terminal TML
2 - Principle

[Message sequence between the TML PC and the 3000 platform: the Web browser requests http://mmm.ii.jjj.kk/BSC3000.html; the HTTP server downloads the HTML page and Java applet; the TML application then tries the connection, sends USER and PASSWORD, and sends commands to and receives answers from the test server.]


Using a web browser, the TML operator loads an HTML page (through HTTP)
holding the TML applet. The TML applet is then downloaded to the TML PC using
the HTTP server.
Once the TML software is loaded on the TML PC, it is possible to start a test
session. The messages between the TML and the BSC are exchanged over a
TCP/IP connection.
The TML communicates with the “Test server” software module.
The TML accesses the MIB for:
• modification of commissioning data:
— OMC-R link definition (IP, direct, …),
— PCM trunk setup,
— physical location definition (name, floor),
• checking software and hardware marking information (a sketch of such a session follows below).
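The following Python sketch mimics such a session at a very high level: an HTTP fetch of the applet page followed by a TCP exchange with the test server. The port number and message strings are purely hypothetical placeholders; only the overall HTTP-then-TCP/IP pattern comes from the text.

```python
# Minimal sketch of a TML session: fetch the applet page over HTTP,
# then exchange commands over TCP/IP. Port and messages are assumed.

import socket
from urllib.request import urlopen

def tml_session(bsc_host: str) -> None:
    # 1. Download the HTML page holding the TML applet (HTTP server).
    page = urlopen(f"http://{bsc_host}/BSC3000.html").read()

    # 2. Open a TCP/IP connection to the test server and authenticate.
    with socket.create_connection((bsc_host, 5000)) as conn:  # port assumed
        conn.sendall(b"USER operator PASSWORD secret\n")   # hypothetical
        conn.sendall(b"READ commissioning_data\n")         # hypothetical
        answer = conn.recv(4096)                           # server answer
        print(answer.decode(errors="replace"))
```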

BSC 3000 and TCU 3000
Provisioning

Section 7


Objectives

After this module of instruction, you will be able to:


> Provision a BSC 3000 and a TCU 3000 Cabinet
> Define
• TMU number in the Control Node of a BSC 3000 Cabinet
• LSA-RC in the Interface Node of a BSC 3000 Cabinet and of a TCU
3000 Cabinet
> Define TRM number in Transcoding Nodes of a TCU 3000
Cabinet


Contents

> BSC 12000HC and BSC 3000 Comparison


> TCU 2G and TCU 3000 Comparison
> BSC 3000 Provisioning
> Mixed TMU1/TMU2 Configurations
> TRM2 Dimensioning
> BSC 3000 Provisioning
> BSC 3000 and TCU 3000 Configurations


BSC 12000HC and BSC 3000 Comparison
Maximum values              One BSC 3000        Three BSC 12000HC    One BSC 12000HC
Erlang                      3000                3600                 1200
TRX                         1000                960                  320
Cells                       600                 480                  160
BTS                         500                 414                  138
LAPD links                  567                 120                  40
SS7 links                   16                  18                   6
E1/T1 links                 126/168             144/144              48/48
A circuits                  3112                3780                 1260
Power consumption (kW)      2.0                 5.1                  1.7
Cabinet dimensions (cm/in)  W: 96/37 D: 60/23   W: 468/182 D: 60/23  W: 156/61 D: 60/23
                            H: 220/86           H: 200/78            H: 200/78
Weight (kg/lb)              570/1254            1620/3564            540/1188
Floor load (kg/m², lb/ft²)  1000/205            600/120              600/120
TCU 2G / TCU 3000           32 / 2              36 / 0               12 / 0

[Photo: BSC 3000 vs BSC 12000HC; BSC 3000 and TCU 3000 (2 cabinets) next to a BSC 12000HC (6 cabinets).]



Flexibility: there is no predefined and limited set of configurations, as there was
for the BSC 12000HC.
A set of product configurations closely fitting the needs of a given customer, in
terms of processing, signaling and PCM connectivity, can be delivered, provided
that these configurations remain within the minimum and maximum product
configurations.
For example, if a 2000 Erlang BSC with the same PCM and signaling connectivity
is delivered to two customers having networks with different traffic profiles, the
number of TMUs in each BSC can be different, say 8 TMUs for one and 10 for the
other.
A BSC 3000 of maximum capacity is ‘equivalent’ to three BSC 12000.

TCU 2G and TCU 3000 Comparison
Maximum values              One TCU 3000         Thirty TCU 2G        One TCU 2G
Ater + A (E1/T1 links)      20+67 for E1 links   30+120               1+4
                            24+89 for T1 links
A-circuits (E1/T1 links)    1944                 3780 for E1 links    120 for E1 links
                                                 2760 for T1 links    92 for T1 links
Power consumption (kW)      2.0                  30                   1
Cabinet dimensions (cm/in)  W: 96/37 D: 60/23    W: 78/30 D: 60/23    W: 78/30 D: 60/23
                            H: 220/86            H: 200/78            H: 200/78
Weight (kg/lb)              570/1254             8490/18678           283/623
Floor load (kg/m², lb/ft²)  1000/205             600/120              600/120

[Photo: TCU 3000 vs TCU 2G (30 shelves); BSC 12000HC (6 cabinets) next to a BSC 3000 and TCU 3000 (2 cabinets).]



BSC 3000 Provisioning
1 - BSC 3000 versus BSC 2G
[Diagram: BSC 2G control and switching chains A and B (active/passive) versus BSC 3000 per-module redundancy: OAM OMU 1+1, ATM SW 1+1, traffic management TMUs in N+P redundancy at 300 Erlang per TMU.]

GSM object capacity vs total number of TMUs (configurations with 1–4 TMUs are not standard):

Total number of TMUs         5    6    7    8    9    10   11   12   13   14
SITE (w/ 1 LAPD channel)     240  300  300  360  420  480  500  500  500  500
SITE (w/ 2 LAPD channels)    120  150  150  180  210  240  270  300  300  300
SITE (w/ 3 LAPD channels)    80   100  100  120  140  160  180  200  200  200

N+P redundancy

The redundancy concept of the BSC 3000 / TCU 3000 is different from that of the
2G BSC/TCU: we are no longer speaking about two redundant chains, but about a
per-module or per-card (LSA) redundancy.
Except for the TMU module, for which an N+P redundancy is implemented, all the
modules are 1+1 redundant.
The entire call processing of the BSC 3000 is based on the TMU module, and the
dimensioning of this module is based on the estimated traffic load (maximum 300
Erlang per TMU).
The estimated traffic for each site is calculated by the BSC 3000 by taking into
account the sum of each cell's traffic (based on the number of TRXs per cell).
The BSC 3000 itself estimates the number of TMUs needed to reach the capacity
required by the sites/cells/TRXs configured by the OMC.
If the calculated number of TMUs exceeds the number of installed TMUs, the BSC
notifies the OMC via a Load Balancing Anomaly of how many TMUs are needed in
order to reach the required capacity.
Since the BSC 3000 no longer has fixed configurations (like types 1 to 5 for the
BSC 2G), the OMC-R only verifies the maximum dimensioning of the BSC 3000.

BSC 3000 Provisioning
2 - Number of TMUs

– M is the minimum number of TMUs needed to run all the active processes.
– P is the minimum number of TMUs needed to run all the passive processes.
– 2 is the number of TMUs needed to run the SS7 active and passive processes.

Capacity (Erlang)   600  900  1200 1500 1800 2100 2400 2700 3000
M                   2    3    4    5    6    7    8    9    10
P                   1    1    1    1    2    2    2    2    2
SS7                 2    2    2    2    2    2    2    2    2
Total               5    6    7    8    10   11   12   13   14


In order to have a well balanced processing load between TMUs, two mechanisms
have been implemented:
• The BSC 3000 makes a site distribution per Cell Group using a special
algorithm so that a number of equally charged Cell Groups can be obtained:
— a Cell Group is a logical entity containing several sites,
— all the Cells and the TRXs belonging to one site are in the same Cell
Group.
• The Cell Groups are then distributed over the existing TMUs. A TMU is
capable of managing up to 16 Cell Groups (8 active and 8 passive). The active
Cell Groups from one TMU will have their passive instance on another TMU.

Concerning TMU redundancy, let us define the following (see the sketch below):
• M is the minimum number of TMUs needed to run the active processes (fault
tolerance).
• P is the minimum number of TMUs needed to run the passive processes.
• 2 is the number of TMUs needed to run the active and passive SS7 processes.

Taking into account these considerations, the BSC 3000 capacity can be defined.
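A minimal Python sketch of the resulting dimensioning rule, assuming TMU1 boards at 300 Erlang each; the P rule (one redundant TMU up to 1500 Erlang, two above) is read off the table on the slide.

```python
# Minimal sketch of the TMU dimensioning rule (TMU1, 300 Erlang each).
from math import ceil

def tmu_count(erlang: int) -> dict:
    m = ceil(erlang / 300)        # TMUs running the active processes
    p = 1 if m <= 5 else 2        # TMUs for the passive processes
    ss7 = 2                       # active + passive SS7 processes
    return {"M": m, "P": p, "SS7": ss7, "total": m + p + ss7}

for erlang in (600, 1500, 1800, 3000):
    print(erlang, tmu_count(erlang))   # matches the 5/8/10/14 totals above
```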

BSC 3000 Provisioning
3 - Abis LAPD Channels

> LAPD link dimensioning:
• up to 567 LAPD channels per BSC 3000,
• 62 LAPDs per TMU module (2 reserved for TCU LAPDs),
• engineering recommendation: 40 BTS LAPDs per TMU,
• dependence between the number of TMUs and LAPDs (configurations with 1–4 TMUs are not standard):

Total number of TMUs    5    6    7    8    9    10   11   12   13   14
LAPD channels (Abis)    120  180  240  300  300  360  420  480  540  567


LAPD link dimensioning:


• Up to 567 LAPD channels can be configured at the BSC 3000 level.
• 62 LAPD can be handled by each TMU module (2 are reserved for TCU
LAPDs).
• The dependence between the number of TMUs and LAPDs that can be
handled by the BSC 3000 is presented in the table.
• The BSC 3000 engineering recommendation is 40 BTS LAPDs per TMU: this
engineering margin provides enough processing capacity on the TMU for the
GPRS LAPDs (see the sketch below).
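A minimal sketch of this rule, under the assumption that only the traffic TMUs (the M boards of the previous table) carry Abis LAPDs; the per-board figure is the 62 LAPDs minus the 2 reserved for the TCU.

```python
# Minimal sketch of the Abis LAPD dimensioning rule for TMU1 boards.

def lapd_capacity(traffic_tmus: int) -> int:
    per_tmu = 62 - 2                     # 2 reserved for TCU LAPDs
    return min(567, per_tmu * traffic_tmus)  # 567 = BSC 3000 cap

# e.g. 2 traffic TMUs (5 boards in total)  -> 120 LAPD channels,
#      10 traffic TMUs (14 boards in total) -> 567 (cap reached)
print(lapd_capacity(2), lapd_capacity(10))
```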

BSC 3000 Provisioning
4 - TMU2

> Configurations allowed

Number of TMU2 vs capacity   600E  900E  1200E  1500E  1800E  2100E  2400E  2700E  3000E
N                            2     2     3      3      4      4      5      6      6
P                            1     1     1      1      1      1      1      1      1
SS7                          2     2     2      2      2      2      2      2      2
Total                        5     5     6      6      7      7      8      9      9

Number of TMU2 vs capacity   525E  1050E  1575E  2100E  2625E  3000E
N                            1     2      3      4      5      6
P                            1     1      1      1      1      1
SS7                          2     2      2      2      2      2
Total                        4     5      6      7      8      9

> LAPD dimensioning

Total TMU2 number    1–3            4    5    6    7    8    9
LAPD channels        not supported  110  220  330  440  550  567


The TMU2 objective is a capacity 1.75 times that of the current TMU1 in terms of
Erlang processing capability, and twice in terms of offered signaling (LAPD/SS7)
ports. This means that the maximum BSC 3000 Erlang capacity can be reached
with only 9 TMU2s (7 instead of 12 for GSM call processing applications, plus 2
for SS7 management).

Mixed TMU1/TMU2 Configurations
> Erlang capacity vs TMU1 and TMU2 number
[Matrix: Erlang capacity as a function of the number of TMU1 traffic boards (0–10) and TMU2 traffic boards (0–11), capped at 3000 Erlang. The legend distinguishes 300, 525, 600 and 825 Erlang redundancy cases, with an optional additional TMU for redundancy or to support harder call profiles.]
Erlang capacity vs TMU1 and TMU2 number (SS7 TMUs and redundant TMUs not taken into account)

As the TMU2 capacity is larger than that of the TMU1, the dimensioning rules
regarding the number of TMUs needed for a chosen target Erlang capacity are
modified (see the sketch below).
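A minimal sketch of the rule suggested by the matrix, assuming 300 Erlang per TMU1 traffic board and 525 Erlang per TMU2 traffic board, capped at the 3000 Erlang BSC maximum (SS7 and redundant TMUs excluded, as in the matrix).

```python
# Minimal sketch of the mixed TMU1/TMU2 Erlang capacity rule.

def mixed_erlang_capacity(n_tmu1: int, n_tmu2: int) -> int:
    return min(3000, 300 * n_tmu1 + 525 * n_tmu2)

print(mixed_erlang_capacity(1, 1))   # 825, as in the matrix
print(mixed_erlang_capacity(10, 0))  # 3000 with TMU1 traffic boards only
print(mixed_erlang_capacity(0, 6))   # 3000 (cap reached with 6 TMU2)
```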

Mixed TMU1/TMU2 Configurations
> LAPD capacity vs TMU1 and TMU2 number
[Matrix: LAPD capacity as a function of the number of TMU1 traffic boards (0–10) and TMU2 traffic boards (0–11), capped at 567 LAPD channels, with the same redundancy legend as the Erlang capacity matrix.]
LAPD capacity vs TMU1 and TMU2 number (SS7 TMUs and redundant TMUs not taken into account)

As mixed TMU1/TMU2 configurations are supported, a faulty TMU board can be
replaced by a board of a different type. However, as board capacity depends on
its type, the impact on the overall BSC capacity has to be checked (for example, if
a faulty TMU2 is replaced by a TMU1, the BSC capacity decreases in terms of
supported Erlangs and offered LAPD ports).
The type of a TMU installed in a Control Node cabinet is given explicitly by the
result of a "Display Marker" action.

TRM2 Dimensioning
Dimensioning figures:
• FR archipelago capacity: 96 circuits (vs 72 for TRM board)
• EFR archipelago capacity: 96 circuits (vs 72 for TRM board)
• AMR archipelago capacity: 96 circuits (vs 60 for TRM board)
• EFR_TTY archipelago capacity: 84 circuits (vs 48 for TRM board)
Thus the capacity of a TRM2 using three FR, EFR or AMR codecs will be
288 circuits.

Nb of TCU     Nb of TRM2         Nb of TRM2         Nb of  Nb of voice  Nb of    Nb of SS7  Capacity
shelves/BSC   (w/o redundancy)   (with redundancy)  LSAs   channels     Ater E1  LAPD       (Erl)
1             1                  1+1                1      288          3        2          247
1             2                  2+1                2      576          4        4          521
1             3                  3+1                2      864          5        4          798
1             4                  4+1                3      1152         7        6          1078
1             5                  5+1                3      1440         9        6          1359
1             6                  6+1                4      1728         9        8          1641
1             7                  7+1                4      1944         16       8          1923
Dimensioning for configurations without any EFR_TTY codec configured


The dimensioning rules, regarding the number of needed TRM boards in a TCU
3000 cabinet, take into account the TRM capacity in terms of maximum number of
terrestrial circuits that can be managed.
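A minimal sketch of this rule for TRM2 boards, assuming 288 circuits per board, one redundant board, and the 7-board / 1944-circuit shelf limit shown in the table (configurations without EFR_TTY).

```python
# Minimal sketch of the TRM2 dimensioning rule from the table above.
from math import ceil

def trm2_boards(a_circuits: int) -> tuple[int, int]:
    """Return (working, with_redundancy) TRM2 counts for a circuit need."""
    working = min(7, ceil(a_circuits / 288))  # shelf limit: 1944 circuits
    return working, working + 1              # +1 redundant board

print(trm2_boards(288))    # (1, 2): 1+1 as in the first table row
print(trm2_boards(1944))   # (7, 8): 7+1 as in the last table row
```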

BSC 3000 Provisioning
5 - GPRS Impact

[Diagram: GPRS network architecture around the V15 BSC 3000: BTS (Abis), TCU 3000 V15 (Ater), MSC/VLR (A interface, MAP-D) towards the PSTN/ISDN, HLR, EIR and SMSC; the BSC 3000 also connects over Agprs to the PCUSN, which reaches the SGSN, the GGSN and a PSPDN.]
MSC: Mobile Switching Centre; VLR: Visitor Location Register; HLR: Home Location Register; BSS: Base Station System; EIR: Equipment Identity Register; PCUSN: Packet Control Unit Support Node; PSPDN: Packet Switched Public Data Network; SGSN: Serving GPRS Support Node; GGSN: Gateway GPRS Support Node

GPRS entails no BSC capacity decrease in terms of processing. In other words, the
processing power of the TMU and of the other processing boards is not a limiting factor
for GPRS dimensioning.
Only the PCM connectivity (Abis + Ater + Agprs) and the circuit switching capacity of the
BSC 3000 have to be taken into account for GSM and GPRS network engineering.
In urban areas, the BSC 3000 has enough PCMs available so that the GPRS introduction
can be done without any PCM dimensioning constraints.
For example a maximum capacity BSC 3000 managing a BSS network made mainly of
S444 BTSs, will need around 90 PCMs for Abis and Ater, out of 126.
Therefore, whatever the GPRS profile is, there will be enough additional PCMs available
for Agprs.
In rural areas (BTS S111 & S222), all PCMs might be used for voice service only.
The introduction of GPRS can then impact the BSC 3000 capacity in terms of the number
of managed BTSs & TRXs.
The maximum circuit switching capacity of the BSC 3000 (2268 64-kbit/s circuits) shall be
taken into account in the dimensioning of a voice + GPRS network.
The switching capacity is not a limitation for voice-only and for low-speed GPRS services
(CS1/CS2).
For high-speed data services, since the radio time-slots carrying those services require
more circuits on Abis and Agprs (2 to 4 times more than for voice and low-speed packet
data), the BSC 3000 switching capacity limit can be reached for some network
configurations, especially for high data penetration (for example 8 radio TS per cell for
GPRS).
The impact on BSC 3000 capacity in terms of the number of managed TRX has to be
determined on a case-by-case basis, according to the network configuration.
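As a very rough illustration of this check, the sketch below counts one circuit unit per voice timeslot and a configurable multiple (2 to 4, per the text) per high-speed GPRS timeslot against the 2268-circuit limit; this per-timeslot accounting is a simplifying assumption, not the actual engineering rule.

```python
# Very rough sketch of the BSC 3000 switching-capacity check for a
# mixed voice + high-speed GPRS configuration. The per-timeslot unit
# accounting below is a simplifying assumption.

def switching_budget_ok(voice_ts: int, gprs_ts: int,
                        gprs_factor: int = 4) -> bool:
    """Check against the 2268 64-kbit/s circuit capacity, counting one
    unit per voice TS and gprs_factor units per high-speed GPRS TS."""
    needed = voice_ts + gprs_factor * gprs_ts
    return needed <= 2268

# e.g. 1800 voice timeslots plus 800 high-speed GPRS timeslots:
print(switching_budget_ok(voice_ts=1800, gprs_ts=800))  # False: limit hit
```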

BSC 3000 and TCU 3000 Configurations
1 - Min and Max Configurations

BSC 3000 and TCU 3000 dimensioning    Min     Max
Erlang                                600     3000
TRX                                   360     1000
BTS                                   120     500
Cells                                 360     600
LAPD links                            120     567
E1/T1 PCM (BSC 3000)                  42/56   126/168
E1/T1 PCM (TCU 3000)                  21/28   84/112
A interface circuits (TCU 3000)       200     1944
A interface circuits (BSC 3000)       620     3112
SS7 links                             3       16


This table gives the dimensioning factors for the BSC 3000 & TCU 3000 in
minimum and maximum configurations.
BSC configuration
• The minimum configuration is a 600 E, which translates to 3 TMUs (2+1 for
redundancy) and 2 LSAs (42 E1 or 56 T1 PCMs).
• The maximum configuration is a 3000 E, which translates to 12 TMUs (10+2
for redundancy) and 6 LSAs (126 E1 or 168 T1 PCMs); the TCU function will
require two Transcoding nodes.
Between these two configurations, all intermediate configurations can be offered;
nevertheless, some product engineering rules are defined to avoid inconsistency
between the number of TMUs and the number of LSAs.
TCU configuration
• The minimum configuration is a 200 E TCU 3000, which translates, in the case
of Enhanced Full Rate, to 2 TRM modules (1+1 redundant) and 1 LSA (21 E1
or 28 T1 PCMs).
• The maximum configuration is a 1800 E: up to 11 TRMs (10+1 redundant) and
4 LSAs in each of the 2 nodes of a TCU cabinet. The TCU 3000 cabinet can
be connected to the same BSC or to 2 different BSCs.
Note: the TCU 3000 can have a maximum of 12 TRM modules if required.
Between these minimum and maximum configurations, all configurations can be
offered. Nevertheless, in the TCU 3000 the number of TRMs and the number of
LSAs are directly related to the required A interface capacity.

BSC 3000 and TCU 3000 Configurations
2 - BSC 3000 and TCU 3000 Typical Examples

BSC 3000        600E    1500E   2400E    3000E
TMU Traffic     2+1     5+1     8+2      10+2
TMU SS7         1+1     1+1     1+1      1+1
LSA             2       3       5        6
Nb of LAPD      120     300     480      567
Nb of E1/T1     42/56   63/84   105/140  126/168

TCU 3000        200E    600E    1200E    1800E
TRM             1+1     3+1     7+1      9+1
LSA             1       2       3        4
Nb of E1/T1     21/28   42/56   63/84    84/112


Nortel Networks will define some market model configurations (rural, semi-urban,
urban, etc.) and some optional extension kits (comprised of TMU, TRM, LSA) in
order to satisfy most of the product configurations required by customers:
• a rural type of configuration, with a relatively low number of TMUs (because
the traffic capacity is low) and a maximum number of LSAs (because many
small BTSs used for coverage need to be connected),
• an urban type of configuration, with a high number of TMUs (high traffic
capacity) and a relatively low number of LSAs (because BTSs have many
TRXs per cell, and there are relatively few BTSs to be connected to the BSC).
Market models and market packages are defined both to optimize the end-to-end
supply chain from the order to the delivery of the products to the customer, and to
satisfy most of the configurations requested by the customers.
The market packages allow a market model to be modified by adding extension
kits, to fit the customer request as closely as possible.

Exercises Solutions

Section 8


Objectives

> After this module of instruction, you will be able to understand the main
data flows for the BSC 3000 and TCU 3000:
• Traffic (circuit switch and packet switch)
• GSM signaling path
• Call processing signaling
• OA&M


Contents

> Internal BSC Dialogues


> Traffic (Circuit and Packet Switch) Path
> GSM Signaling Path
> BSC 3000/TCU 3000 Dialogue


Internal BSC Dialogues
[Diagram: the Control Node (two OAM OMUs with MMS disks, TMUs running traffic management, two ATM SW planes) dialogues with the Interface Node switching unit (CEM, two ATM RMs with ATM/PCM interfaces at 64 kb/s, an 8K RM for 8 kb/s switching, and LSA-RC PCM controllers towards the BTSs and the TCUs) through ATM plane 1 and plane 2.]


Two paths are established simultaneously using the two planes.

Internal BSC Dialogues
[Diagram: same configuration as on the previous slide; the dialogue path is shown here through the second ATM plane.]

Traffic (Circuit and Packet Switch) Path
[Diagram: traffic (circuit and packet switch) path. Circuit traffic flows from the BTS through the BSC Interface Node (LSA-RC PCM controllers, 8K RM, ATM RM / CEM switching unit) and the TCU Transcoding Node (LSA-RC PCM controllers, TRM vocoders, ATM RM / CEM) towards the MSC; packet traffic is routed towards the PCU. The Control Node (TMUs running traffic management, OAM OMU, two ATM SW planes) controls the path.]

GSM Signaling Path
[Diagram: GSM signaling path for LAPD. Legend: BTS LAPD signaling, full-TS LAPD signaling, LAPD signaling on ATM. The LAPD channels from the BTS cross the Interface Node (LSA-RC PCM controllers, 8K RM, ATM RM / CEM switching unit) and reach the Control Node TMUs over the ATM planes.]

GSM Signaling Path
MTP1 & MTP2: TMU A
MTP3 & SCCP: TMU B
[Diagram: SS7 signaling path. The SS7 link from the MSC crosses the TCU Transcoding Node and the BSC Interface Node (64 kb/s through the CEM switching unit, 8 kb/s via the 8K RM) up to the two Control Node TMUs over the ATM SW planes.]


For SS7 signaling, two TMUs are always involved: one TMU manages the MTP1
and MTP2 layers, while the second manages the upper layers of SS7 signaling
(MTP3 and SCCP).

BSC 3000/TCU 3000 Dialogue
OA&M and Call Processing (1/2)
[Diagram: both the OA&M and the call processing dialogues between the BSC 3000 and the TCU 3000 are handled by the same TMU; the 64 kb/s dialogue channels run from the Control Node TMU through the Interface Node and the Transcoding Node (CEM switching units, TRM vocoders) on the path towards the MSC.]


BSC 3000/TCU 3000 Dialogue
OA&M and Call Processing (2/2)
[Diagram: same path as on the previous slide, but the OA&M and call processing dialogues are handled by two different TMUs.]

