Description - Technical
1075A
Student Guide
Guide release: 16.03
Guide status: Standard
Date: November, 2006
411-1075A-001.1603
The information contained in this document is the property of Nortel Networks. Except as specifically
authorized in writing by Nortel Networks, the holder of this document shall not copy or otherwise
reproduce, or modify, in whole or in part, this document or the information contained herein. The
holder of this document shall protect the information contained herein from disclosure and
dissemination to third parties and use the information solely for the training of authorized individuals.
THE INFORMATION PROVIDED HEREIN IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
KIND. NORTEL NETWORKS DISCLAIMS ALL WARRANTIES, EITHER EXPRESSED OR
IMPLIED, INCLUDING THE WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE. IN NO EVENT SHALL NORTEL NETWORKS BE LIABLE FOR ANY
DAMAGES WHATSOEVER, INCLUDING DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL,
LOSS OF BUSINESS PROFITS OR SPECIAL DAMAGES, ARISING OUT OF YOUR USE OR
RELIANCE ON THIS MATERIAL, EVEN IF NORTEL NETWORKS HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
Overview
Description
This course is a comprehensive technical description of the BSC3000 and TCU3000 products. It applies to the V16 release of the BSS.
Intended audience
This course is designed for people who need to know the functions
and architecture of the BSC3000 and TCU3000.
Prerequisites
This course has the following prerequisites:
• 1061A: GSM GPRS System Overview - Technical
Objectives
After completing this course, you will be able to:
• Describe the physical and functional architecture of the BSC 3000 and TCU 3000
• Describe module functions and interfaces
• Trace the signaling and traffic paths inside and outside the equipment
1. Introduction
2. BSC 3000 and TCU 3000 Presentation
3. BSC 3000 and TCU 3000 Architecture
4. Data Flow Exercises
5. BSC 3000 and TCU 3000 Operation
6. BSC 3000 and TCU 3000 Maintenance and Enhanced Exploitability
7. BSC 3000 and TCU 3000 Provisioning
8. Exercises Solutions
Introduction
nortel.com/training
411-1075A-001.1603
November, 2006
FOR TRAINING PURPOSES ONLY
1-1
About Knowledge Services
Nortel Homepage
www.nortel.com
Training & Certification Page
www.nortel.com
• Select Training
• Select the appropriate product family
• Choose a product
• Get the content
Selecting the appropriate geographic region and language allows you to customize your view.
Points of Contact:
• CAMs (Customer Account Managers) – the customer can direct questions and issues to their internal training prime, who can contact the Nortel CAM.
• CSRs (Customer Service Representatives) – reachable via the regional call center number.
• Instructor – provides business cards, e-mail address and phone number.
Training Page
Curriculum Paths Page
Technical Documentation
www.nortel.com
Select Support & Training
Select Technical Documentation
GSM BSS Nortel Technical Publications
(Documentation roadmap diagram; the clearly identifiable NTPs include:)
• BSS Overview, 411-9001-001
• BSS Fundamentals - Operating Principles, 411-9001-007
• OMC-R Fundamentals, 411-9001-006
• BSS Terminology, 411-9001-032
• BSC 3000/TCU 3000 Fundamentals, 411-9001-126
• BTS S8000/S8002/S8003/S8006 Fundamentals, 411-9001-063
• BTS S12000 Fundamentals, 411-9001-142
• BTS 18000 Fundamentals, 411-9001-160
• BTS e-cell Fundamentals, 411-9001-092
• PCUSN Fundamentals, 411-9001-091
• WPS for PCUSN Fundamentals, 411-9001-802
• WQA Fundamentals, 411-9001-205
• OMC-R Commands Reference - Security, Administration, SMS-CB, and Help menus, 411-9001-137
• OMC-R Commands Reference - Configuration, Performance, and Maintenance menus, 411-9001-129
• OMC-R Commands Reference - Objects and Fault menus, 411-9001-128
• WPS for PCUSN Configuration Procedures, 411-9001-201
• BSS Configuration - Operating Procedures, 411-9001-130
• BSS CT2000 Configuration - Procedures, 411-9001-148
• CT2000 Configuration Reference, 411-9001-804
• CT2000 Installation & Administration, 411-9001-149
• BTS Installation & Administration, 411-9001-202
• BSS Parameter Reference, 411-9001-124
• BSS Performance Management - Observation Counters Dictionary, 411-9001-125
• BSS Performance Management - Observation Counters Fundamentals, 411-9001-133
• RACE Fundamentals and Commands Reference, 411-9001-127
• TML (BTS) Commissioning and Fault Management, 411-9001-051
• TML (BSC 3000/TCU 3000) Commissioning and Fault Management, 411-9001-139
• OMC-R Routine Maintenance and Troubleshooting, 411-9001-803
• Fault Management - Maintenance Principles, 411-9001-039
• Advanced Maintenance Procedures, 411-9001-105
• BSC 3000/TCU 3000 Troubleshooting, 411-9001-132
• BTS S8000/S8003 Troubleshooting, 411-9001-048
• BTS S8002 Troubleshooting, 411-9001-084
• BTS S12000 Troubleshooting, 411-9001-144
• BTS 18000 Troubleshooting, 411-9001-162
• e-cell Troubleshooting, 411-9001-090
• BSC 3000/TCU 3000 Fault Clearing, 411-9001-131
• BTS S8000/S8002/S8003/S8006 Fault Clearing, 411-9001-103
• BTS S12000 Fault Clearing, 411-9001-143
• BTS 18000 Fault Clearing, 411-9001-161
• PCUSN Fault Clearing, 411-9001-106
• WQA Installation and Administration, 411-9001-206
• WQA Configuration Procedures, 411-9001-207
Course Objectives
Course Contents
> Introduction
> BSC 3000 and TCU 3000 Presentation
> BSC 3000 and TCU 3000 Architecture
> Data Flow Exercises
> BSC 3000 and TCU 3000 Operation
> BSC 3000 and TCU 3000 Maintenance and Enhanced
Exploitability
> BSC 3000 and TCU 3000 Provisioning
> Exercises Solutions
Student notes:

Student notes:
BSC 3000 and TCU 3000
Presentation
Section 2
nortel.com/training
Objectives
Contents
BSS in GSM Network
(Diagram: the BSS within the GSM network. The BSS comprises BTSs (S8000 Outdoor, e-Cell, S12000 Indoor, BTS 18010, BTS 18020 Combo), the BSC, the TRAU (TCU) and the PCUSN. The MS reaches a BTS over the Radio interface; the BTSs connect to the BSC over the Abis interface; the BSC connects to the TCU over the Ater interface, to the MSC (NSS, Public Switched Telephone Network) over the A interface, to the OMC-R (with Sun StorEdge A5000) over the OMN interface, and to the PCUSN over the Agprs interface; the PCUSN reaches the GPRS Core Network and the Internet over the Gb interface.)
The Base Station Subsystem includes the equipment and functions related to the
management of the connection on the radio path.
It mainly consists of one Base Station Controller (BSC), and several Base
Transceiver Stations (BTSs), linked by the Abis interface.
An optional piece of equipment, the Transcoder/Rate Adapter Unit (TRAU), called the TransCoder Unit (TCU) in Nortel Networks BSS products, is designed to reduce the number of PCM links.
These different units are linked together through specific BSS interfaces:
• Each BTS is linked to the BSC by an Abis interface.
• The TCUs are linked to the BSC by an Ater interface.
• The A interface links the BSC/TCU pair to the MSC.
• The Agprs interface links the BSC to the PCUSN.
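As a compact summary, the interface list above can be captured in a small lookup table (an illustrative sketch only; the interface and element names come from the list above, the helper function is hypothetical):

```python
# Sketch of the BSS interface topology described above (illustrative only).
BSS_INTERFACES = {
    "Abis":  ("BTS", "BSC"),    # each BTS is linked to the BSC
    "Ater":  ("BSC", "TCU"),    # the TCUs are linked to the BSC
    "A":     ("TCU", "MSC"),    # the BSC/TCU pair reaches the MSC
    "Agprs": ("BSC", "PCUSN"),  # packet traffic towards the PCUSN
}

def peer_of(interface: str, element: str) -> str:
    """Return the network element at the other end of an interface."""
    a, b = BSS_INTERFACES[interface]
    if element not in (a, b):
        raise ValueError(f"{element} is not an endpoint of {interface}")
    return b if element == a else a
```

For example, peer_of("Abis", "BTS") returns "BSC".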
BSC Functions
1 - Basic Functions
(Diagram: basic BSC functions - terrestrial resources management, routing between the BTSs and the MSC, traffic concentration, and SMS-CB management, illustrated by a broadcast message such as "CAUTION: CRASH ON E12 HIGHWAY".)
BSC Functions
2 - OA&M Functions
(Diagram: BSC OA&M functions - BTS and TCU management: startup, shutdown, supervision, observation, and data + software downloading; Ethernet link towards the OMC-R.)
TCU Functions
(Diagram: one 64 kbit/s time slot carries 4 speech channels plus signaling, or 4 data channels; the TCU converts the GSM speech frames into PSTN/ISDN A-law or µ-law speech.)
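"A-law speech" refers to standard G.711 companding towards the PSTN/ISDN. As background, a minimal textbook A-law encoder looks like the following (an illustrative sketch of the G.711 algorithm, not Nortel TCU code):

```python
# Minimal G.711 A-law encoder (textbook sketch, not TCU firmware).
SEG_END = (0x1F, 0x3F, 0x7F, 0xFF, 0x1FF, 0x3FF, 0x7FF, 0xFFF)

def linear_to_alaw(pcm: int) -> int:
    """Compress one 16-bit signed linear PCM sample to an 8-bit A-law byte."""
    pcm >>= 3                        # 16-bit to 13-bit magnitude range
    if pcm >= 0:
        mask = 0xD5                  # sign bit set, plus even-bit inversion
    else:
        mask = 0x55
        pcm = -pcm - 1
    # Find the logarithmic segment the sample falls into.
    seg = next((i for i, end in enumerate(SEG_END) if pcm <= end), 8)
    if seg >= 8:                     # clip any out-of-range sample
        return 0x7F ^ mask
    aval = seg << 4
    aval |= (pcm >> 1 if seg < 2 else pcm >> seg) & 0x0F
    return aval ^ mask
```

Silence (sample 0) encodes to 0xD5, the standard A-law idle pattern.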
BSC 3000 and TCU 3000
1 - Physical Presentation
The BSC 3000 and the TCU 3000 are one-cabinet equipment assemblies,
composed of two Nodes and one Service Area Interface.
These Nodes are each housed in a sub rack comprising two shelves.
The cabinet is designed for indoor applications.
The design allows front access to the equipment.
External cabling from below or above is supported.
The Service Area Interface or SAI is installed on the left side of the cabinet:
• it provides front access to the PCM cabling,
• it contains electrical equipment used to interface the BSC or the TCU and the
customer cables.
The product is EMC compliant. No rack enclosure is required for this reason, as
EMC compliance is achieved at the sub rack level (Control Node, Interface Node
and Transcoding Node).
BSC 3000 and TCU 3000
2 - Physical Description
(Diagram: physical layout. BSC 3000 cabinet: power supplies, fans, Control Node and Interface Node, with the Service Area Interface (PCM cabling). TCU 3000 cabinet: power supplies, fans, Transcoding Nodes, with the Service Area Interface (PCM cabling).)
The BSC 3000 is a one-cabinet unit, composed of two Nodes and one Service Area Interface.
The two BSC 3000 Nodes are the Control Node and the Interface Node.
In addition, the Control Node (in charge of Call Processing and OA&M) of the BSC
3000 implements a Fault Tolerant architecture, based on redundancy of processes
and a load balancing mechanism on the processors, allowing fast recovery of
service (within a few seconds) after a hardware failure.
The TCU 3000 is a one-cabinet unit, composed of up to two Transcoding Nodes and one Service Area Interface.
The power supply for both the BSC 3000 and TCU 3000 is –48 V dc.
The maximum power consumption of the BSC 3000 or TCU 3000 is 2 kW.
Each Node (sub rack) is powered by one rack power distribution tray.
Each sub rack is cooled by four fans (replaceable). The fan rack is also referred to
as the Cooling Unit assembly.
BSC 3000 and TCU 3000
3 - Mixed System Architecture
(Diagram: mixed system architecture - the OMC-R manages a V14.3 BSC 2G over X.25 (with V12.4 and V14.3 BTSs) and V15 BSC 3000s over Ethernet (with V15 BTSs).)
The V15 release introduces enhanced BSC 3000 capacity for EDGE functionality.
The BSC 3000 and TCU 3000 are intended to interwork with current BSC 12000, BTS and OMC-R products.
These products require a software upgrade to interwork with the BSC 3000 and TCU 3000.
The OMC-R/BSC 3000 link is TCP/IP over Ethernet, instead of the native X.25 used for the BSC 12000.
The OMC-R/BSC 3000 link over the A/Ater interface is not available in V15.1.1.
BSC 3000 and TCU 3000 Hardware
1 - Cabinet Structure and Cooling
(Diagram: cabinet structure and cooling; dimensions shown in mm: 2200, 900, 600, 300, 600.)
BSC 3000 and TCU 3000 Hardware
2 - Generic Module
Student notes:

Student notes:
BSC 3000 and TCU 3000
Architecture
Section 3
nortel.com/training
Objectives
Contents
BSC/TCU 3000: External Links
(Diagram: BSC/TCU 3000 external links. Abis interface towards the BTSs: LAPD RSL, OML and GSL channels. Ater interface towards the TCU: SS7, LAPD OML, voice and data. Agprs interface towards the PCUSN: GPRS data and LAPD GSL. Ethernet link towards the OMC-R.)
BSC 3000 and TCU 3000 Generic Architecture
(Diagram: generic architecture. Control Node: OMU (OAM) with private MMS, TMUs (traffic management) and duplicated ATM SW, plus shared MMS. Interface Node: ATM RM, CEM, 8K RM and LSA RC modules. Transcoding Node: LSA RC and TRM modules.)
The BSC 3000 is composed of the Control Node and the Interface Node.
The TCU 3000 is composed of the Transcoding Node.
Control Node main functionalities are:
• Management of OAM for the C-Node, I-Node and T-Node,
• Traffic management towards the BTSs and MSC,
• BTS supervision, Transcoding Node supervision,
• OMC-R link management,
• Failure detection and processing,
• Handover procedures,
• BSS configuration and software management,
• BSS performance counter management,
• ATM Management.
Interface Node main functionalities are:
• I-Node OAM management,
• Switch management and Timeswitch control,
• PCM interface,
• ATM Management.
Transcoding Node main functionalities are:
• T-Node OAM management,
• Switch management and call processing,
• BSC Access,
• Carrier Maintenance.
BSC 3000: Control Node
(Diagram: Control Node slot layout, slots 1 to 15 on each shelf.
Shelf 1: ATM SW x2, filler, SIM B, OMU x2, TMU x7.
Shelf 0: MMS Shared x2, MMS Private x2, filler, SIM A, TMU x7, filler x2.)
Control Node: ATM Platform
(Diagram: Control Node ATM platform. The OMUs (OAM) and TMUs (traffic management) connect over 25 Mbit/s ATM links to both ATM SW planes; each plane reaches an ATM RM in the Interface Node over a 155 Mbit/s ATM link; the ATM RM (ATM/PCM interface) connects to the CEM over 64 kbit/s S-links.)
The Control Node is a computing and signaling platform built around an ATM
switch.
Globally, the Control Node is designed as a fully redundant ATM switch for any
inside and outside communications.
Internal and external exchanges are carried over ATM through a redundant optical
OC3 connection using ATM at 155 Mbps:
• for internal communication between the Control and the Interface Nodes.
The platform is also fully ATM inside; ATM connections are not terminated on the access ports of the Control Node (the ATM switches) but on the computing modules inside the shelf. Addressing to/from the Control Node is based on VPI/VCI.
ATM RMs and ATM SW modules are provisioned in pairs to provide redundancy
and connection protection:
• both planes are used at the same time,
• all messages exchanged between ATM RMs and ATM SW modules are
duplicated.
Control Node Architecture
(Diagram: Control Node architecture - the active and passive OMUs (OAM), each with an OMC-R interface and a private MMS disk on its own SCSI bus, share the mirrored MMS disk pair.)
Operation and Maintenance Unit
1 - Overview
(Diagram: the OMU (Operation, Administration & Maintenance) connects through the ATM SW to the TMUs (traffic management) and to the Interface Node, manages the MMS disk, and provides the OMC-R and TML (local maintenance terminal) interfaces.)
The Operation and Maintenance Unit module is responsible for the following
functions:
• management of all BSC resources (both Control and Interface Nodes),
• BSS interface with the OMC-R (Ethernet),
• OMC link management, either over a physical serial link or as constant-bit-rate data sent over the ATM data link,
• disk management,
• interface to the Local Maintenance Terminal (TML).
The OMU is provisioned in a 1+1 redundancy scheme.
Operation and Maintenance Unit
2 - OA&M Functions
(Diagram: BSC OA&M on the OMU - Configuration Management, Fault Management and Performance Management, dialoguing with the OMC-R and with the OA&M agents on the TMUs, BTSs, CEM and LSA RC.)
Operation and Maintenance Unit
3 - BSS Interface with OMC-R
(Diagram: BSC-OMC protocol stack - a proprietary Association layer over TCP over IP over Ethernet on both sides, linked through a TCP/IP network; the OMU faceplate provides an RJ45 Ethernet connector and an RS232 debug port.)
Though the same OMC-R manages both the BSC 2G and the BSC 3000, the
interface between the BSC 3000 and the OMC-R is Ethernet TCP/IP, instead of
X.25 as for the BSC 2G.
Two data paths are available for OMC-R access and/or other purposes:
• PCM: on one or more TS (DS0) via the LSA RC module, (available in V15)
• Ethernet: TCP/IP on Ethernet 10/100 Mbps.
The direct Ethernet connection is provided by the RJ45 connector of the OMU
faceplate.
A switching device or four-port LAN hub, located in the SAI, is required.
A small sublayer based on IETF RFC 1006 allows dialog with the proprietary Association and Application layers.
When the BSC 3000 is remote from the OMC-R, they can be interconnected
through a network (X.25, Frame relay, etc.) with a minimum throughput of
128 kbps.
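IETF RFC 1006 defines a thin TPKT framing layer that carries ISO transport packets over TCP. A minimal sketch of that framing (illustrative only, not the BSC implementation):

```python
import struct

def tpkt_pack(payload: bytes) -> bytes:
    """Wrap an ISO transport TPDU in an RFC 1006 TPKT header:
    version (3), reserved (0), 16-bit total length including the 4-byte header."""
    return struct.pack("!BBH", 3, 0, len(payload) + 4) + payload

def tpkt_unpack(frame: bytes) -> bytes:
    """Strip and validate the TPKT header, returning the TPDU payload."""
    version, _, length = struct.unpack("!BBH", frame[:4])
    if version != 3 or length != len(frame):
        raise ValueError("malformed TPKT frame")
    return frame[4:]
```

The 4-byte header is the version byte (always 3), a reserved byte, and a 16-bit length that counts the header itself.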
Mass Memory Storage
1 - SCSI Bus
(Diagram: the active and passive OMUs (OAM) of the Control Node share SCSI buses to the MMS disks.)
There are four Mass Memory Storage modules (hard disk) in the BSC.
They are linked to the OMU modules through four SCSI buses.
Two SCSI buses are dedicated to the two private disks, which store:
• the AIX OS (400 MB),
• the software for the OMU boards.
The other two serve the mirrored shared disks, which store:
• the local MIB (BDA),
• observations and notifications,
• call traces,
• supervision data,
• BTS and TCU software.
The pair of shared SCSI buses and the disks on them are managed only by the active OMU.
The shared SCSI buses will only be accessed after “election” of the active OMU.
When a switch of activity occurs (fault tolerance mechanism), the newly active
OMU gains control of the pair of shared SCSI buses.
Mass Memory Storage
2 - Disk Sub-system
(Diagram: the active OMU (1) and passive OMU (2) of the Control Node on the shared SCSI buses.)
Each OMU module controls a private disk, which holds all the private data (OS and system data) for the module, and a pair of shared disks (BSS database and GSM data) managed as a mirrored pair.
Each Mass Memory Storage module contains a 9 GB SCSI-2 hard disk.
At boot time, each OMU module has access to its private SCSI bus and thus to its private disk.
The pair of shared disks holds the data that must be secured and still be
accessible in the event of an OMU failure or a disk failure.
The protection of the shared disks is independent from the protection of the
OMUs: the non active OMU can be extracted from the system without any impact
on the disk transactions.
In the event of the extraction of the active OMU, a swact of the OMUs occurs, and
the disk subsystem is still protected from a single failure.
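The mirrored-pair behaviour described above - every write applied to both shared disks, reads served by any surviving disk - can be modeled with a toy sketch (illustrative only, not the actual disk driver):

```python
class MirroredPair:
    """Toy model of the shared-disk mirror: writes go to both disks,
    reads are served by any surviving copy."""

    def __init__(self):
        self.disks = [dict(), dict()]   # two block maps
        self.alive = [True, True]       # disk health flags

    def write(self, block: int, data: bytes) -> None:
        # Duplicate the write on every surviving disk (mirroring).
        for i, disk in enumerate(self.disks):
            if self.alive[i]:
                disk[block] = data

    def read(self, block: int) -> bytes:
        # Any surviving disk can serve the read after a single failure.
        for i, disk in enumerate(self.disks):
            if self.alive[i] and block in disk:
                return disk[block]
        raise IOError("no surviving copy")
```

A single disk failure after a write leaves the data readable from the other disk.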
Mass Memory Storage
3 - MMS2 Introduction
(Diagram: MMS2 module - new 73 GB disk (from Hitachi), SCSI expander, slot-dependent activation, LVD SCSI terminators at the end of the SCSI bus, backplane connector, drive LED, and -48 V DC/DC converter.)
MMS2 HW presentation:
• Higher-capacity disk: 73 GB (vs 9 GB for MMS1).
• MMS2 boards replace MMS1 boards and provide the same functionality.
Mass Memory Storage
4 - MMS2 and Higher Capacity Disk
(Diagram: MMS2 front panel view with removal-request push button; the 73 GB disk replaces the flanged 9 GB/36 GB disks.)
The current MMS1 module (9 GB) houses a SCSI hard disk. The new MMS2 disk, introduced in V15.1, is a 73 GB device that uses the same SCSI interface as the 9 GB disk.
Traffic Management Unit
1 - Main Functions
TMU
Traffic Management:
• radio resources: TMG_RAD
• connections (setup, release, HO): TMG_CNX
• A interface messages (paging, incoming HO): TMG_MES
• Agprs interface messages: TMG_RPP
SS7 Management: MTP1, MTP2, MTP3 and SCCP
LAPD Management: levels 1, 2 and 3
The TMU is responsible for the BTS configuration and the main Call Processing functions:
• GSM/GPRS traffic management,
• GSM signaling (LAPD and SS7),
• GPRS signaling (LAPD).
These functions are processed by six software modules.
TMG_RAD:
• manages radio resources for a group of sites: allocation, modification and release of
radio channels,
• manages the RSL dialog on the Abis and radio interfaces,
• supervises coherence of allocated channels between the BTSs and the BSC.
TMG_CNX:
• drives setup, release, assignment and handover,
• asks for traffic connections.
TMG_MES:
• codes/decodes A interface messages,
• drives connectionless messages: paging, incoming HO.
TMG_RPP: codes/decodes Agprs interface messages.
TMG_COM: allocation, release and administration of terrestrial circuits (CICs).
SPR: Supervision of BTS sites (configuration and defense).
For reliability purposes, the main Call Processing sub-functions use the Fault Tolerance service: for each sub-function, there is one active entity on one TMU and one passive entity on another TMU.
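The active/passive pairing of sub-functions across TMUs can be sketched as follows (a toy placement model of the fault-tolerance idea; the function is hypothetical, not Nortel's FT service):

```python
def place_entities(functions, tmus):
    """Assign each sub-function an active and a passive TMU, round-robin,
    with the passive copy always hosted on a different TMU than the active one.
    Requires at least two TMUs."""
    placement = {}
    for i, fn in enumerate(functions):
        active = tmus[i % len(tmus)]
        passive = tmus[(i + 1) % len(tmus)]   # next TMU hosts the standby
        placement[fn] = {"active": active, "passive": passive}
    return placement
```

Round-robin placement also spreads the active entities over the TMUs, which echoes the load-balancing idea mentioned earlier for the Control Node.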
Traffic Management Unit
2 - Call Processing and Traffic Management
(Diagram: Call Processing and the Resource Allocator.)
The Traffic Management Unit (TMU) is responsible for managing the GSM protocols in a broad sense:
• provide processing power for GSM Call Processing,
• terminate GSM protocols (A, Abis and Ater interfaces),
• terminate low level GSM protocols (LAPD and SS7).
The GSM Call Processing function is responsible for the management of GSM
communications:
• traffic management (connections and transfer of user information MS/MSC),
• network resource allocation (terrestrial circuits and radio resources),
• handover,
• radio measurements,
• power control.
The corresponding software is spread over all TMU modules, but is split into
several entities:
• radio resource allocation: per BTS site,
• terrestrial circuit allocation: per TCU and per PCUSN,
• MSC connection and BSC transaction: internal criteria.
Traffic Management Unit
3 - TMU2 Introduction
TMU2 HW presentation: a single board (TM + SBC + PMC), PEC code NTQE04BA, with 2 MB Flash and 8 MB SSRAM.
TMU1 and TMU2 are distinguished by their different PEC code values.
ATM Subsystem
ATM 25 Interface Distribution
(Diagram: each OMU/TMU in the Control Node has redundant 25 Mbit/s ATM links to both ATM switches (ATM SW modules); both switch planes are active.)
The Control Node uses a duplex star topology, with cell switching performed in both ATM SW modules at the centers of the stars and the other Resource Modules at the leaves.
From a hardware perspective, the ATM subsystem is the key factor for platform
robustness and scalability.
This subsystem provides reliable backplane board interconnections with live
insertion capabilities. It has two main components:
• a pair of ATM switches (ATM SW module), working simultaneously,
• an ATM Adapter, located in each of the OMU and TMU modules.
The connections between modules use redundant ATM 25 point to point
connections to ATM switches, allowing:
• high fault isolation, signal integrity,
• live insertion,
• backplane redundancy,
• scalability.
The backplane supports a redundant ATM 25 Mbps to any slot using the ATM 25
standard as defined by the ATM Forum.
It carries all the internal signaling information, using the AAL1 and AAL5 protocols.
ATM Switch Module
1 - Functions
(Diagram: ATM SW module functions - an ATM switch with AAL1/AAL5 segmentation and reassembly (SAR), messaging communication and an ATM routing table; multiplexers terminating 2x 25 Mbit/s ATM links towards the OMUs (OA&M) and 4x/6x 25 Mbit/s ATM links towards groups of TMUs (traffic management); a UTOPIA physical interface; and a 155 Mbit/s SONET OC3 optical link.)
ATM Switch Module
2 - ATM Switching Principle
(Diagram: cells entering Port 1 are switched towards Ports 2 and 3.)
Switching Table (Input Port, VPI, VCI -> Output Port, VPI, VCI):
• (1, 1, 8) -> (2, 4, 5)
• (1, 6, 4) -> (3, 2, 9)
VC/VP ATM switch: Input(Port, VPI, VCI) -> Output(Port, VPI, VCI)
VP ATM switch: Input(Port, VPI) -> Output(Port, VPI)
ATM switching consists first of establishing a virtual circuit for each communication, using a virtual channel (VC) and a virtual path (VP).
These virtual circuits are established statically according to engineering rules; they are Permanent Virtual Circuits (PVCs).
The main function of an ATM switch is to receive cells on a port and to switch
those cells to the proper output port based on the VPI and VCI values of the cell.
This switching is controlled by a switching table that maps input ports to output
ports based on the values of the VPI and VCI fields.
While the cells are switched through the switching fabric, their header values are
also translated from the incoming value to the outgoing value.
Addressing tables converting between VP/VC and slot number are loaded from
ATM SW module at startup time and stored in the flash EPROM of the ATM part
of all modules:
• AAL1 routing tables are dynamic,
• AAL5 routing tables are static.
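The switching-table behaviour described above - look up (port, VPI, VCI), forward on a new port, rewrite the header - can be sketched in a few lines, using the example entries from the slide (an illustrative model, not switch firmware):

```python
# Toy VC switch: the table maps (in_port, vpi, vci) -> (out_port, vpi, vci),
# mirroring the PVC switching table shown on the slide.
SWITCH_TABLE = {
    (1, 1, 8): (2, 4, 5),
    (1, 6, 4): (3, 2, 9),
}

def switch_cell(in_port: int, vpi: int, vci: int):
    """Return (out_port, new_vpi, new_vci) for an incoming cell,
    translating the header values as the cell crosses the fabric."""
    try:
        return SWITCH_TABLE[(in_port, vpi, vci)]
    except KeyError:
        # No PVC provisioned for this header: the cell is discarded.
        raise LookupError("no PVC provisioned for this cell") from None
```

A cell arriving on port 1 with VPI 1/VCI 8 leaves on port 2 with VPI 4/VCI 5, exactly as in the table above.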
BSC 3000: Interface Node
(Diagram: Interface Node slot layout, slots 1 to 15 on each shelf.
Shelf 1: LSA RC 1, LSA RC 2, LSA RC 3, fillers, SIM A, ATM RM x2.
Shelf 0: LSA RC 5, LSA RC 0, LSA RC 4, filler, 8k RM 0, 8k RM 1, CEM 0, CEM 1, SIM B.
LSA RC 0 is mandatory for synchronization.)
The Interface Node is connected to the Control Node by four optical fiber cables
with a standard ATM interface.
There are four major hardware modules that make up the Interface Node:
• the Common Equipment Module (or CEM),
• the 8K subrate matrix Resource Module (or 8K RM),
• the Low Speed Access Resource Complex module (or LSA RC),
• the Asynchronous Transfer Mode Resource Module (or ATM RM).
The maximum configuration for the Interface Node is the following:
• six LSA RC modules,
• two ATM RM modules,
• two 8K-RM modules,
• two CEM modules,
• two SIM modules.
The CEMs have special slots (slots 7 and 8, in shelf 0), and both are always
provisioned.
The LSA-RC module 0 is mandatory, as the Interface Node is synchronized
through the PCMs of this slot (synchronizing PCMs 0-1-2-3-4-5).
Interface Node
1 - General Architecture
(Diagram: the ATM RM provides the ATM/S-link interface towards the Control Node; the Switching Unit (CEM with its 64 kbit/s matrix and PCM controller, plus the 8 kbit/s 8K RM) connects over S-links to the LSA RC modules serving the Ater (TCU), Agprs (PCUSN) and Abis (BTS) interfaces.)
Interface Node
2 - Detailed Architecture
(Diagram: detailed architecture - duplicated CEM and 8K RM planes (one active, one passive per module type), linked by IMC links and DS512 links and connected to the Control Node; both planes connect over S-links to the duplicated PCM controllers of the LSA RCs serving the BTSs, TCUs and PCUSNs.)
ATM RM
Logical Architecture
(Diagram: ATM RM logical architecture - a physical OC-3 layer terminating the 155 Mbit/s ATM links from the Control Node, the ATM layer, an AAL1 convergence sublayer with segmentation and reassembly mapping DS0s (LAPD, SS7) to PCM time slots, AAL5 for messaging (OA&M, CallP) via the SPM, and the ATM/S-link interface towards the S-links.)
Switching Unit
1 - Common Equipment Module
(Diagram: the CEM hosts the Switch Manager (driven by Call Processing), the 8K and 64K Connection Managers, OA&M, the 64K Switching Matrix and the PCM clock; it controls the 8K RM (8 kbit/s subrate switching), the ATM RM (ATM/S-link interface) and the PCM controllers of the LSA RCs.)
The Common Equipment Module is the main module of the Interface Node.
The CEM handles the following functions:
• channel connection management (traffic switching),
• control of the Interface Node Resource Modules (downloading, testing, configuring),
• system maintenance, using the TML,
• clock synchronization,
• alarm processing.
The main function of the Switch Manager is to establish, release and modify Abis/Ater connections in the Switching Matrix (switch fabric), under the control of Call Processing (TMU).
Its other function is to establish 64 kbps connections for signaling links.
The switch fabrics are updated on both CEMs to ensure consistency between
them.
The CEM is provisioned in a 1+1 hot stand-by redundancy scheme.
One CEM is active, i.e. actually performing Call Processing functions, while the
other is inactive, ready to take over if the active module fails.
The messages between the IN-OA&M application (OMU) and the CEM are
exchanged using the IP protocol over AAL5 ATM circuits. The IN O&M application
handles only IP addresses and TCP/UDP ports.
Switching Unit
2 - Common Equipment Module and 8K RM
(Diagram: Switching Unit - the CEM 64 kbit/s matrix and the 8K RM 8 kbit/s subrate matrix, reached from the LSA RC PCM controllers over the Primary S-link.)
The switching unit manages all the connection requests sent by the Call Processing and BTS OA&M applications from the Control Node (TMU).
The Integrated Control Manager (ICM) software of the CEM is responsible for establishing connections between bearer channels, using a two-stage matrix.
The switching unit is composed of two types of module:
• the Common Equipment Module (CEM) offers a 64 kbps matrix (switch fabric), capable of switching only at TS (DS0) level,
• the 8K RM is a subrate matrix Resource Module, which provides a secondary switching stage for individual bits within each TS.
Internal dialog between the CEM and the other modules (LSA RC and 8K RM) is carried over reserved TSs (30 to 40) of the Primary S-link.
Switching Unit
3 - 8K-RM (SRT)
(Diagram: 8K RM (SRT) - the S-link interface towards the active and passive CEMs (Primary S-link, 64 kbit/s), the messaging channel interface, the sequencer and the clock.)
The 8K RM is used for the bearer channels that have to be switched by the Interface Node between the Abis and Ater interfaces.
The 8K RM, or Subrate Matrix, is a 4096-channel bit-to-bit switch, which communicates with the two CEMs via nine S-links connected to the backplane.
The 8K RM is provisioned in a 1+1 (active/active) redundancy scheme.
The active CEM module controls the switching activity of the two 8K RM modules,
using the 36 reserved TSs of the Primary S-link:
• switch messaging (30 TS),
• synchronization (6 TS).
The S-link Interface extracts messaging for communication with CEMs and
generates the reference clock.
The Channel Sequencer performs rate adaptation and channel selection.
The Switching Matrix performs channel switching at an 8 kHz frame rate, using an
eight-bit matrix, working in parallel. The fanout is limited to 2268 Time Slots
(payload).
Switching Unit
4 - DS512
(Diagram: new DS512 optical links between the CEM and the 8K RM, alongside the internal S-link switching connections and the external switching connections.)
In V14, the BSC 3000 can switch up to 2268 DS0s on the Abis, Agprs and Ater interfaces.
With the introduction of EDGE, this switching capacity needs to be increased so that it does not become a limiting factor.
To increase the BSC 3000 switching capacity, four DS512 links (optical fibers) are established between the CEM and the 8K-RM module.
With this connection, the BSC 3000 DS0 capacity increases from 2268 up to 4056 DS0s.
Switching Unit
5 - Internal S-Link Connection
9 S-links = 256 x 9 = 2304 Time Slots
(Diagram: in the Interface Node, the ATM RM (ATM/S-link interface) connects to the CEM over S-links, and the CEM connects to the 8K RM (8 kbit/s) over nine S-links.)
As the 8K RM needs nine S-links to the CEM, it has a fixed position in shelf 0 of the Interface Node, whereas the LSA-RC and ATM RM need only three S-links on the backplane.
Each S-link provides 256 Time Slots.
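These figures are consistent with the 36 reserved TSs (30 for switch messaging, 6 for synchronization) and the 2268-TS payload fanout quoted earlier for the 8K RM; the arithmetic can be checked directly:

```python
# Time-slot budget of the nine S-links between CEM and 8K RM.
S_LINKS = 9
TS_PER_SLINK = 256
RESERVED_TS = 30 + 6            # switch messaging + synchronization TSs

total_ts = S_LINKS * TS_PER_SLINK
payload_ts = total_ts - RESERVED_TS
print(total_ts, payload_ts)     # 2304 2268
```

2304 total time slots minus the 36 reserved ones leaves exactly the 2268 DS0s available for payload switching.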
Switching Unit
6 - Switching LAPD and SS7 Time Slots
(Diagram: LAPD and SS7 time slots (TSa, TSb, TSc) arriving on both ATM RMs (ATM/S-link interfaces) are carried on separate S-links to the CEM and Y-connected to the PCM controllers of the required LSA RC modules.)
LAPD and SS7 messages arriving in AAL1 cells on both ATM modules are carried
on two separate S-links to the CEMs.
A Y-connection connects the two identical TSs to the required LSA module:
• in the ATM RM to LSA-RC direction, only the TS of the active plane is
switched,
• in the LSA-RC to ATM RM direction, the TS is broadcast to both S-links.
S-links used for signaling are called Primary S-links.
Low Speed Access Resource Complex
1 - Functions
LSA RC
IEM
CEM
Transcoding TIM
PCM
Mapper NRZ
Framer HDB3
64 kbit/s /B8ZS
HDLC Passive
Controller
S-links IEM PCM
Selection (E1 or T1)
CEM IEM
Transcoding
PCM
64 kbit/s Mapper NRZ
Framer HDB3
/B8ZS
HDLC Active
Controller
411-1075A-001.1603
3-32 November, 2006
FOR TRAINING PURPOSES ONLY
The Low Speed Access Resource Complex (LSA-RC) is used to interface the BSC to
the TCU, the PCU and the BTS.
The LSA-RC is the PCM interface module.
It is called a “Resource Complex” because it is made of three modules (occupying
three slots):
• two Interface Electronic Modules (IEMs), in 1+1 hot-standby redundancy
(field-replaceable without service disruption),
• one Terminal Interface Module (TIM), a passive switch that routes the PCMs
towards the active IEM. The TIM contains no electronic components (very high
MTBF) and provides LSA internal redundancy.
Main IEM functions:
• the S-Link Mapper transfers payload data between the channels on the S-Link
interface and the corresponding channels of the PCM30/DS1 link interface,
• the transcoding stage converts the line code from NRZ to HDB3 (or B8ZS),
• the HDLC controller performs LAPD level 2 treatment (only used in the TCU).
The BSC Interface Node can contain up to six LSA-RC modules, providing 126
PCM30 (E1) or 168 DS-1 (T1) ports.
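The port totals can be cross-checked against the SAI figures given later in this section (seven CTMx per CTU, three E1 or four T1 each); a minimal sketch:

```python
LSA_RC_PER_NODE = 6   # LSA-RC modules per BSC Interface Node
CTM_PER_CTU = 7       # CTMx modules per CTU (one CTU per LSA-RC)
E1_PER_CTM = 3        # CTMC / CTMP styles
T1_PER_CTM = 4        # CTMD style

e1_total = LSA_RC_PER_NODE * CTM_PER_CTU * E1_PER_CTM   # 126 PCM30
t1_total = LSA_RC_PER_NODE * CTM_PER_CTU * T1_PER_CTM   # 168 DS-1
print(e1_total, t1_total)
```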
Low Speed Access Resource Complex
2 - Physical Architecture
[Diagram: RC Mini backplane and Spectrum backplane; PCMs run from the CTMx (CTU) through the TIM to the IEM]
Low Speed Access Resource Complex
3 - LSA RC Front Panel
A red indicator signals a fault condition on the span shown in the PCM (span)
display:
• Loss Of Signal,
• Alarm Indication Signal,
• Loss Of Frame Alignment,
• Remote Alarm Indication.
The display shows the number of the PCM in fault and the type of problem; a
pushbutton steps to the next PCM in fault.
411-1075A-001.1603
3-34 November, 2006
FOR TRAINING PURPOSES ONLY
Low Speed Access Resource Complex
4 - External Connection
[Diagram: Rx/Tx PCM cables between the IEMs, the passive TIM and the SAI]
The SAI is a cabinet attached to the BSC frame, enabling front access to the
PCM cabling. It can host up to six CTUs (plus two optional HUBs) in the BSC SAI
and eight CTUs in the TCU SAI.
Low Speed Access Resource Complex
5 - IEM / IEM2
[Diagram: LSA RC module with its RCM; IEM and IEM2 boards can be mixed around the passive TIM, one active and one passive]
The Interface Electronics Module (IEM) is a component of the Low Speed Access
(LSA) module. It is the electronic interface for E1 or T1 PCMs. Two IEM
instances are associated with each LSA, duplicated in a 1+1 protection scheme.
LSA modules are located either in the BSC 3000 Interface Node or in the TCU 3000.
The IEM2 evolution is part of the normal life-cycle management of the BSC/TCU
3000 hardware modules.
Mixed configurations of IEM1 and IEM2 modules are allowed on the same shelf.
Therefore, an LSA may be equipped with two IEM1 modules, with two IEM2 modules,
or with one IEM1 module and one IEM2 module.
TCU 3000: Transcoding Node
[Diagram: TCU 3000 cabinet with two Transcoding Nodes (shelf 0 and shelf 1, slots 1 to 15), each holding LSA RCs, TRMs, CEMs, SIMs and filler modules; the LSA RC in slots 4-6 of shelf 0 is mandatory for synchronization]
The TCU 3000 is based on the Spectrum architecture, as is the Interface Node of
the BSC 3000.
One TCU 3000 cabinet consists of:
• two independent Transcoding Nodes (one sub-rack each),
• one cabling interface area (SAI), which provides front access to the PCM
cabling.
Each sub-rack supports twenty-eight modules (or slices) and two Shelf Interface
Modules (SIMs).
The TCU 3000 uses the last PCM ports of LSA logical no. 0 (the LSA in slots 4-6
of shelf 0) as synchronizing PCMs.
By default, the synchronizing ports are:
• no. 15, 16, 17, 18, 19, 20 for E1 PCMs,
• no. 22, 23, 24, 25, 26, 27 for T1 PCMs.
TCU 3000 Transcoding Node
[Diagram: Transcoding Node with up to 12 TRMs (vocoders) connected by S-links to a passive/active CEM pair (64 kbit/s switching, IMC between the CEMs), which connects to the LSA-RC modules over up to 4 links]
The Transcoding Node is composed of a controller (the CEM) and a set of
Resource Modules (RMs) connected point-to-point to the CEM via S-links through
the backpanel.
Common Equipment Module
1 - Signaling Processing
[Diagram: Transcoding Node between the Ater interface (BSC side) and the A interface (MSC side); LAPD TSs are processed by the HDLC Controllers of the LSA RCs under CEM Call Processing, while SS7 TSs are simply switched at 64 kbit/s]
LAPD links established between the TCU and the BSC (on the Ater) are used for
both the OA&M and Call Processing functions located on the CEM:
• OA&M: management of the TCU under the control of the BSC:
— downloading and configuration, from the BSC local disk,
— supervision: event reports are sent to the OMC-R through the BSC.
• Call Processing: specific per-call treatments performed by the TCU are
initiated by the BSC:
— choice of the voice algorithm,
— Ater and A Time Slots to be used.
LAPD links are:
• switched by the switching matrix of the CEM, coming from the ATM-SW,
• processed by the HDLC Controller (up to four links) located on an LSA-RC
module,
• carried on Ater PCM TSs:
— Call Processing: one TS per LSA-RC module,
— O&M: one TS per TCU node.
SS7 Time Slots are simply switched through the switching matrix, without any
transcoding process.
Common Equipment Module
2 - Information Switching and Processing
[Diagram: 64 kbit/s Time Slots switched and processed through the Transcoding Node towards the MSC]
Transcoder Resource Module
[Diagram: TRM organization: SPU DSPs and frame synchronization/handover processing; one archipelago = 3 islands]
The Transcoder Resource Module (TRM) performs the GSM transcoding function. The
TRM supports 216 vocoders:
• Full Rate (FR), Enhanced Full Rate (EFR) and AMR voice coding/decoding,
• data rates up to 14.4 kbit/s.
A TRM contains one processor (Motorola PowerQUICC) and 45 DSPs (Motorola DSP
311), organized in three identical archipelagos, each of which can be assigned
dynamically to a particular type of vocoder: FR, EFR or AMR (from V14).
Each archipelago is made of one Mailbox DSP and three DSP islands.
Each island consists of five DSPs:
• 1 PPU (Pre-Processing Unit) DSP managing frame synchronization, handovers, etc.,
• 4 SPU (Signal Processing Unit) DSPs managing the vocoding (six vocoders each).
The TRM is provisioned in an N+1 load-sharing redundancy scheme.
A TCU 3000 sub-rack (Transcoding Node) can contain up to 12 TRM modules.
The allocation of the vocoders is a dynamic process, the result of a real-time
adjustment starting at the initialization of the TCU.
When there are two or more types of vocoder to manage, the operator has to
define, for each TCU 3000 node, the minimum capacity associated with each type
of vocoder, in terms of the number of communications to process.
During this process, the TCU may have to modify the initial partitioning in
order to satisfy a larger number of requests than planned for a specific coder.
If the operator wants the TCU 3000 to perform dynamic resource allocation, the
minimum required capacity for each vocoder must be configured so as to leave
some transcoding resources in the “free pool”.
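The 216-vocoder figure follows from the DSP organization above, assuming the "six vocoders" apply to each SPU DSP (an interpretation, not an explicit statement in the text):

```python
ARCHIPELAGOS_PER_TRM = 3
ISLANDS_PER_ARCHIPELAGO = 3
SPU_PER_ISLAND = 4
VOCODERS_PER_SPU = 6   # assumption: "six vocoders" read as per SPU DSP

vocoders_per_trm = (ARCHIPELAGOS_PER_TRM * ISLANDS_PER_ARCHIPELAGO
                    * SPU_PER_ISLAND * VOCODERS_PER_SPU)
print(vocoders_per_trm)  # 216, matching the stated TRM capacity
```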
Transcoder Resource Module
2 - TRM2 Introduction
[Diagram: TRM2 board: PowerQUICC controller with FLASH/SDRAM, S-link interfaces (SLIFS) and a TDM bus serving three archipelagos (MLB, PPU and SPU DSPs) plus a common archipelago]
TRM2 hardware presentation:
• The TRM2 board is composed of three archipelagos, each dedicated to one codec
type (FR, EFR, AMR, EFR_TTY).
• The TRM2 hardware code is NTQE08BA.
Internal PCM S-Link Allocation
[Diagram: S-links allocated inside the Transcoding Node between the TRM vocoders, the CEM (64 kbit/s switching) and the PCM controllers of the LSA RCs towards the BSC]
3-43
Maintenance Trunk Module Bus
411-1075A-001.1603
3-44 November, 2006
FOR TRAINING PURPOSES ONLY
3-44
Shelf Interface Module
Power Distribution
[Diagram: duplicated power distribution: SIM A (A Feed) and SIM B (B Feed) deliver -48 V to the shelf back modules, where PUPS units produce +3.3 V, with PCIU alarm interfaces]
Each shelf has two Shelf Interface Modules, either of which can supply all 28
modules on its own.
The two SIMs provide, for the shelf:
• an EMI-filtered power supply (-48 V),
• power switching (30 A) with soft-start circuitry,
• CEM/PCIU alarm interfaces,
• craftsperson access.
When a SIM module needs to be extracted (for repair or upgrade), it is
necessary to switch off the module and to disconnect the power feed on the
faceplate.
Service Area Interface
1 - Overview
[Diagram: SAI with its CTUs; each Cable Termination Unit holds a CTB and seven CTMx (CTMx1 to CTMx7) and is linked by Tx/Rx cables to the TIM of one LSA RC module (RC Mini backplane, active/passive IEMs)]
The Service Area Interface comprises the CTU (Cable Termination Unit) modules
(six in the BSC SAI, eight in the TCU SAI) which provide the physical interface
between the LSA RC modules and the customer's spans.
Each CTU is associated with one LSA RC module and includes:
• one CTB (Cable Transition Board), equipped to mate the backplane with seven
CTMx,
• seven CTMx (Cable Transition Modules) which provide the following functions:
— terminate the cables that connect to the TIM board (LSA-RC) via the CTB,
— provide connectors for terminating the customer A and Ater PCMs,
— provide secondary surge protection, manual loopback switches, and passive
electronics for impedance matching for PCM30 coax connections.
The CTMx is available in three styles:
• CTMC (PCM30 coax), which provides three E1 PCMs,
• CTMP (PCM30 twisted pair), which provides three E1 PCMs,
• CTMD (DS-1 twisted pair), which provides four T1 PCMs.
The CTU numbering and the linking between the LSA and the CTU must respect the
following principles:
• The operator must easily find the CTU corresponding to an LSA, in order to
connect the LSA to the CTU.
• The operator must find the CTU associated with an LSA when the LSA is
displaying a span error on its faceplate (connection/loopback operation of the
corresponding CTM).
Service Area Interface
2 - CTU Connection
[Diagram: PCM numbering per CTMx position: E1 ports 0-20 (three per CTM, seven CTMs) or T1 ports 0-27 (four per CTM)]
For Ater PCMs, the connection is recorded in the lsaPcmList parameter of the
LSA-RC object at the OMC-R.
Service Area Interface
3 - BSC
[Diagram: BSC SAI: CTU#0 to CTU#5 cabled to the six LSA RCs of the Interface Node (logical numbers 1, 2, 3 on the upper row and 5, 0, 4 on the lower row), which also holds the ATM RMs, CEM 0/1 and the 8K RMs]
Service Area Interface
4 - TCU
[Diagram: TCU SAI: CTU#0 to CTU#3 cabled to the four LSA RCs of the upper Transcoding Node, and CTU#4 to CTU#7 to those of the lower Transcoding Node (each node with its CEM 0/1)]
Student notes:
Data Flow Exercises
Section 4
Objectives
Contents
Internal BSC Dialogues
[Diagram: Control Node (OMUs with OAM and MMS, TMUs with Traffic Management, duplicated ATM SWs) and Interface Node Switching Unit (ATM RMs, CEM with 64 kbit/s switching, LSA RCs towards the BTSs and TCUs, 8K RM at 8 kbit/s)]
On the block diagram of the Control and Interface Nodes, trace the path for
internal messaging:
• between TMUs,
• between the OMU and the CEM.
Circuit Switch/Packet Switch Path
[Diagram: BSC (Control Node: TMUs, OMU, ATM SWs, PCU connection; Interface Node: ATM RMs, Switching Unit CEM) and TCU (Transcoding Node) block diagram]
On the block diagram of the BSC and TCU, trace the path for circuit switch traffic
and packet switch communication.
GSM Signaling Path
[Diagram: BSC (Control Node and Interface Node) and TCU (Transcoding Node) block diagram]
On the block diagram of the BSC and TCU, trace the path for BTS/LAPD and
MSC/SS7 signaling.
BSC 3000 and TCU 3000 Dialogue
[Diagram: BSC (Control Node and Interface Node) and TCU (Transcoding Node) block diagram]
On the block diagram of the BSC and TCU, trace the path for Call Processing
dialog and Operation and Maintenance between the BSC and the TCU.
Student notes:
BSC 3000 and TCU 3000
Operation
Section 5
Objectives
Contents
Operation and Maintenance
Overview
[Diagram: O&M access to the BSC via the OMC-R (RACE) and the local TML]
The BSC 3000 takes advantage of its high processing power to perform many O&M
tasks in parallel: for example, it takes charge of the software upgrade of all
its BTSs once it has received the full software load from the OMC-R.
It can download the software of up to 100 TRXs simultaneously, considerably
decreasing the upgrade duration and the time needed to bring the whole BSS
network back into service after a cold restart.
The hardware and software architecture of the BSC 3000 and TCU 3000 (one-to-one
links between hardware modules, supervision software, supervision activity of
passive modules) allows precise and immediate fault detection (of both hardware
and software failures).
The simplicity of the hardware architecture allows the BSC to detect any
hardware fault very precisely, at module level.
Each hardware module is a replaceable unit and supports hot insertion: once
detected as faulty, it can be replaced without stopping the BSC or the TCU, and
the new module is automatically configured and put into service by the BSC.
Object Model at the OMC-R
1 - OMC-R/BSC Interface
[Diagram: old versus new BSC object model at the OMC-R]
The object model will converge towards the Q.3 object model of the OMC-R; in
this way, the Q.3 mediation done in the OMC-R will become easier and more
effective.
Managed object modeling (list of objects, and their associated attributes, actions,
notifications and counters), is equivalent to the one proposed on the Q.3 interface
of the OMC-R Mediation Device.
Main benefits:
• less mediation:
— an average mediation rate of 4% instead of 55%,
— network vision uniformity,
• single-stream OA&M:
— design cost reduction (OMC-R CM, BSC OA&M),
• hardware management:
— clear board identification, board restart, test triggering.
Object Model at the OMC-R
2 - bsc and transcoder Objects
[Diagram: object tree: the bsc object (with cn and in below it) and the transcoder object are triggered automatically; underneath come the module objects (lsa*, cem, trm) and the board objects (mms, iem); * = manually updated]
New hardware objects are introduced into the OMC-R BSS Q.3 object model for
each type of board or module to be managed in a BSC 3000 and TCU 3000.
These objects will be used by the different OMC-R applications (configuration,
fault, performance), exactly like the other Q.3 objects. For example, a fault related
to a hardware module will be notified directly on the corresponding hardware
object.
These hardware objects will be made visible both in the “internal” Q.3 interface
(MD/OMC-R) and in the “external” one (MD/NMS).
The main objects are triggered automatically: bsc3GEqpt, cn and in.
The LSA-RC shall be created manually by the operator at a specific position in the
shelf (configuration data of the LSA-RC object).
This creation results in the creation of the Resource Complex Management and
the TIM:
• the IEMs follow standard plug & play module management,
• the TIM is always in the central position (x position),
• the two redundant IEM modules always surround the TIM (x-1 and x+1
position) at the OMC-R level.
All board objects are created automatically.
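The containment hierarchy described above can be sketched as a tree. This is a purely illustrative Python structure, not an actual OMC-R artifact; the object names follow the Q.3 model described in the text:

```python
# Hypothetical containment tree: bsc3GEqpt, cn and in are created
# automatically; the lsa object is created manually by the operator.
OBJECT_TREE = {
    "bsc3GEqpt": {
        "cn": {"mms": {}},
        "in": {"lsa": {"iem": {}, "tim": {}}, "cem": {}},
    },
    "transcoder": {"trm": {}, "lsa": {"iem": {}, "tim": {}}},
}

def all_objects(tree, prefix=""):
    """Flatten the tree into slash-separated object paths."""
    for name, children in tree.items():
        path = prefix + "/" + name
        yield path
        yield from all_objects(children, path)

print(sorted(all_objects(OBJECT_TREE)))
```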
Object Model at the OMC-R
3 - BSC 3000 Control
[Diagram: BSC 3000 board layout as represented in the OMC-R GUI]
All hardware modules of the BSC 3000 & TCU 3000 are modeled and managed
as logical objects. This allows both the BSC 3000 and the OMC-R to provide the
operator with precise information and services on each individual hardware
module:
• Board representation on the OMC-R GUI: The physical BSC board layout will
be represented in the OMC-R GUI.
• Fault representation: A hardware problem can be tracked thanks to this new
representation which allows faulty boards to be highlighted on the OMC-R GUI.
• Private data collection: dynamic data can be collected per board to give the
operator specific information related to the boards/modules (localization,
firmware identification, inventory information).
• Maintenance actions: actions can be performed on some boards/modules in
order to prevent or correct hardware problems (board resets) or to trigger
tests from the OMC-R.
• Performance measurement: a new localization is performed on the Q.3 and
BSC/OMC interfaces, which significantly reduces the number of counters defined
in the Q.3 interface. Access to the observation report is thus simplified.
Object Model at the OMC-R
4 - TCU 3000 Control
[Diagram: TCU 3000 board layout as represented in the OMC-R GUI]
Graphical view of a TCU 3000, with easy fault localization thanks to the module
representation.
Software Architecture
1 - Software Layers
[Diagram: software stack: Platform Layer (Supervision, Startup, Load Balancing, Messaging) over a Base OS Layer (memory and disk access) over Hardware/Firmware]
Software Architecture
2 - BSC Software Architecture
[Diagram: Control Node software: OMU (OA&M, TCU/BTS OA&M, Platform, AIX Base OS) and TMUs (GSM Call Processing, PCUSN/TCU/BTS OA&M, Platform, VxWorks Base OS); Interface Node software: CEM (IN OA&M, switch management, SAPI/Base, VRTX Base OS), LSA RC, 8K RM and ATM RM (RM OA&M, switch management, Base, VRTX Base OS)]
The BSC 3000 Control Node software is divided into two main areas:
• A "TMN front-end" area composed of the two OMUs: the software is composed of
centralized functions (OMC-R interface management, Data Base management, etc.)
possibly duplicated in passive mode on the mate OMU.
• A "Traffic Management" area composed of the TMUs: the architecture is based on a
scalability policy. This means that a BSC can be equipped with only one TMU with
extension capability when more TMUs are provisioned (total = up to 14 TMUs). This
implies a distributed software architecture to share the processing load over all the
TMUs. In this way, the "GSM application" and the "Platform" layers are designed as
distributed software.
The distribution criteria are closely linked to the managed objects:
• the software in relation with the TCU should preferably be distributed per TCU
equipment,
• the software in relation with the PCUSN should preferably be distributed per PCUSN
equipment,
• the software in relation with the BTS objects (BCF, TRX, TDMA, etc.) should
preferably be distributed per BTS site,
• the software in relation with the MSC should be distributed only on software
architecture criteria. In fact, the A interface objects are viewed as unmarked resources
from the MSC point of view.
To achieve these goals, the "Load Balancing" and "Fault Tolerance" services provide
respectively the capability to distribute the application entities over all the provisioned
TMUs and the capability to protect the system from (or at least to reduce) the impact of
failures.
Software Downloading
1 - BSC Downloading from the OMC-R
[Diagram: BSS software downloaded from the OMC-R to the BSC MMS (disk) via FTAM/EFT through the OMU, with local TML access]
The BSS software (BSC, TCU and BTS) is downloaded into the BSC from the
OMC-R. For each version and edition, the complete BSS software is delivered on
a CDROM. This volume can be used at the OMC or TML level.
It is compressed and divided into several files, in order to download only the
files modified between two versions and to reduce the downloading duration as
much as possible.
The BSC 3000 stores two versions of the BSS software. The new version is
downloaded in the background, without impacting BSC service.
Both the BSS software and the BSC OS can be downloaded in the background, or
installed locally from the TML.
There is no PROM memory on the BSC 3000 and TCU 3000 hardware modules, with
the exception of the ATM SW module (ATM switch).
All firmware is in flash EPROM and can be modified and downloaded remotely by
the system.
The complete BSS software (BSC, TCU and BTS) is downloaded from the OMC-R
to the BSC via FTAM.
The OMC and BSC 3000 are connected through Ethernet and IP protocols.
The throughput is up to 10/100 Mbit/s (standard Ethernet) if the OMC-R is
locally connected to the BSC.
When the BSC is remote, a minimum throughput of 128 kbit/s is necessary for
efficient OMC-BSC communication.
Software Downloading
2 - BTS and TCU Downloading
[Diagram: BSS software on the BSC disk distributed by the active OMU through the ATM SW and TMUs of the Control Node, then through the Interface Node (ATM RM, 8K-RM, LSA-RC) towards the TCU (LSA-RC, TRM) and the BTS (BCF, TRX)]
BTS downloading
The BSC can download ten BTSs simultaneously per TMU.
With ten “active” TMUs, 100 BTSs can be downloaded simultaneously.
The BSC 3000 supports BTS background downloading since V16.
TCU downloading
TCU 3000 software is downloaded by the BSC 3000. It is compressed and divided
into several files, in order to download only the files modified between two
versions and to reduce the downloading duration as much as possible.
The BSC 3000 stores two versions of the TCU software.
The new version can be downloaded as a background task, without impacting TCU
service.
The TCU software can also be installed locally from the TML.
A set of LAPD connections is used for TCU management in normal operation.
To download the TCU, supplementary LAPD connections must be set up.
These connections pre-empt (or wait for) time slots used for communications.
Up to four LAPD channels can be managed per LSA module.
Downloading the set of files (about 20 Mbytes per TCU) takes:
• with four LAPDs: about 20 minutes (requires a minimum of 2 LSAs),
• with eight LAPDs: about 10 minutes (requires a minimum of 3 LSAs).
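A rough feasibility check of the quoted durations (illustrative only; the 64 kbit/s per-LAPD rate is the standard time-slot rate, and attributing the remainder to protocol overhead is an assumption):

```python
FILE_SET_BITS = 20e6 * 8   # about 20 Mbytes of TCU software
LAPD_RATE_BPS = 64_000     # one 64 kbit/s time slot per LAPD channel

raw_seconds = {n: FILE_SET_BITS / (n * LAPD_RATE_BPS) for n in (4, 8)}
for n, secs in raw_seconds.items():
    print(f"{n} LAPDs: {secs / 60:.1f} min raw transfer time")
# The raw figures (about 10.4 and 5.2 min) are roughly half the quoted 20 and
# 10 minutes; the difference is LAPD framing and protocol overhead.
```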
Startup
1 - BSC or TCU Cold Startup (MIB not built)
[Diagram: cold startup sequence: board recovery, then module recovery, up to the Control Node]
The overall startup sequence describes how the BSC goes from its initial power-
up state, with no software running, to a fully operational state where the
applications are running and providing GSM service.
This type of startup is called dead office recovery and first needs the entire Control
Node startup sequence to be performed.
The operator builds the network at OMC-R level and creates the BSC logical
object.
As soon as the OMC-R/BSC link is established, the BSC sends a notification
indicating that a MIB build is requested.
Upon receipt of this notification, the OMC-R triggers the MIB build phase:
• The MIB (Management Information Base) is built on the active OMU.
• The “Build BDA N+1” upgrade feature is provided on the BSC 3000, as in a
BSC 2G.
• This phase ends with the creation of the MIB logical objects followed by the
reception of a report build message.
Startup
2 - Board Startup: General Behavior
[Diagram: board startup order: Base OS, then non-FT applications and FT creators, then Fault Tolerant applications]
Startup
3 - BSC or TCU Hot Startup (MIB built)
• Boards
– Active OMU_SBC
– Passive OMU_SBC
– OMU_TM / TMU_TM
– TMU_SBC
– TMU_PMC
– ATM SW
• Modules
– OMU
– TMU
– ATM SW
• C-Node Startup
• I-Node Startup
• T-Node Startup
Since the MIB is already built, we only have to check the hardware configuration
consistency.
We must check that modules have not been introduced or removed when the BSC
or the TCU was previously switched off.
The BSC and TCU otherwise have the same behavior as for a cold startup.
The consistency between the new and the previous hardware configuration is
checked at the OMC-R level.
Three cases may happen:
• A module has been extracted: the corresponding object is deleted on the MMI
and in the MIB, and an alarm on the father object indicates the suppression.
• A module has been plugged into a previously-free slot: the corresponding
object is automatically created on the MMI and in the MIB, and an alarm on the
father object indicates the creation.
• A module has been replaced by another one:
— The object corresponding to the replaced module is deleted on the MMI
and in the MIB.
— The object corresponding to the newly inserted module is created on the
MMI and in the MIB.
— Alarms on the father object indicate the suppression and the creation.
Student notes:
BSC 3000 and TCU 3000
Maintenance and Enhanced
Exploitability
Section 6
Objectives
Contents
New Exploitability Principles
1 - Redundancy
[Diagram: BSC defense: duplicated OMU A/B and ATM SW A/B, plus N+P TMUs]
The BSC 3000 and TCU 3000 provide carrier-grade availability. All hardware
modules are totally redundant, including PCM interface modules.
But unlike the current BSC 12000, total duplication of all critical BSC hardware is
not required, and a board failure does not entail a switch over to a whole set of
passive boards.
In the BSC 3000 and TCU 3000, the modules work according to one of the
following three modes:
• in hot standby (active/passive) mode: OMU, CEM, IEM (LSA-RC module) and
8K-RM (SRT); a single faulty board has no impact on the BSC or TCU, and
multiple faults also have no impact, provided one module (or IEM board) of
each pair still works,
• in parallel mode (both modules simultaneously active): ATM SW (ATM switch) +
ATM RM, shared MMS + private MMS,
• in N+P mode: TMU and TRM; the modules work in load sharing, each processing
both active and passive processes, and P failures still preserve the nominal
capacity.
The Fault Tolerance algorithm implemented in the BSC Control Node allows fast
fault recovery, by reconfiguring the software activity on working modules, without
impacting service.
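The three modes can be restated as a lookup table (an illustrative summary of the list above, not a real BSC data structure):

```python
# Module-to-redundancy-mode summary, restating the text above.
REDUNDANCY_MODE = {
    "OMU": "hot standby", "CEM": "hot standby",
    "IEM": "hot standby", "8K-RM": "hot standby",
    "ATM SW": "parallel", "ATM RM": "parallel", "MMS": "parallel",
    "TMU": "N+P", "TRM": "N+P",
}

def modules_in_mode(mode):
    """List the modules protected by a given redundancy mode."""
    return sorted(m for m, v in REDUNDANCY_MODE.items() if v == mode)

print(modules_in_mode("N+P"))  # ['TMU', 'TRM']
```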
New Exploitability Principles
2 - Cell Group Concept
• BSC 2G:
— up to 2 CPU-BIFP boards (CPUE) dedicated to Call Processing,
— cellGroup = collection of BTSs managed on the same board,
— 2 cellGroups.
• BSC 3000:
— up to 14 TMU modules dedicated to Call Processing,
— cellGroup = collection of BTS sites,
— 96 cellGroups.
To manage the BTS sites a new concept is introduced with the BSC 3000: the Cell
Group.
Each site (and all the cells and TRXs belonging to this site) is held by a Cell
Group.
A Cell Group (called CG) can hold several sites.
The CG entity is instantiated into an active and a passive instance, which are
located on different TMUs.
The CG is in charge of all the Call processing related to the BTSs (Supervision of
the BTS, Call processing of all the communications in these cells).
The distribution of BTS sites into Cell Groups is handled by an internal
algorithm that the operator can only partially control; it is configured
either:
• automatically and statically by the ADM application,
• by the operator from the OMC-R, through an optional parameter (number of
estimated TRXs) transmitted at site creation.
Each Cell Group is able to manage up to 300 Erlangs.
Each TMU module is able to manage:
• an average of 300 Erlangs,
• up to 100 TRXs,
• up to 16 Cell Groups (8 active and 8 passive).
New Exploitability Principles
3 - Cell Group Management
[Diagram: Cell Groups distributed over the TMUs of a BSC 3000]
The Cell Groups are determined at boot time by the Load Balancing function,
according to data associated with the cells:
• when a BTS is added to the BSC, it is added to an old or a new Cellgroup
thanks to the same algorithm,
• when a cell or a TRX is added to a BTS, the corresponding Cellgroup has
more load.
The redistribution of the sites into Cell Groups is a complex task, normally
performed by the BSC, respecting the following CG dimensioning rules and
capacity objectives:
• 54 CGs per BSC,
• 10 sites maximum per CG,
• 18 CGs per TMU,
• 75 TRXs maximum per CG,
• a maximum of 16 TRXs per cell and 48 TRXs per site (not linked to CG
allocation, but the maximum site size in V15.1).
Due to the complexity of the software links, a site must be placed in a CG by
the BSC at its creation, and cannot be moved to another CG afterwards. The only
way to move a site from one CG to another is to delete it and then re-create it.
Another possibility is to perform an on-line build (with complete service loss
of the whole BSC for a few minutes).
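The dimensioning rules lend themselves to a simple validity check. The sketch below is hypothetical helper code (the rule constants come from the text; the function and its input format do not):

```python
MAX_CG_PER_BSC = 54
MAX_SITES_PER_CG = 10
MAX_CG_PER_TMU = 18
MAX_TRX_PER_CG = 75

def cg_distribution_ok(cell_groups):
    """cell_groups: list of dicts like {"sites": 4, "trx": 30, "tmu": 0}."""
    if len(cell_groups) > MAX_CG_PER_BSC:
        return False
    cgs_per_tmu = {}
    for cg in cell_groups:
        if cg["sites"] > MAX_SITES_PER_CG or cg["trx"] > MAX_TRX_PER_CG:
            return False
        cgs_per_tmu[cg["tmu"]] = cgs_per_tmu.get(cg["tmu"], 0) + 1
    # no TMU may host more than 18 Cell Groups
    return all(n <= MAX_CG_PER_TMU for n in cgs_per_tmu.values())
```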
New Exploitability Principles
4 - Estimated Site Load Parameter
The BSC 3000 Load Balancing feature uses a table that predefines the cell
Erlang load for 1 to 16 TRXs: ERLANG_PER_N_TRX_CELL (BSC Data Config).
The values, in milliErlang, can be modified by the customer without service
interruption (class 3).
By default, this table is filled with the Erlang B law results (2% blocking
rate).
In V15.1, the estimated Erlang load of a site can be set with the parameter
estimatedSiteLoad, a class 3 parameter of the btsSiteManager object.
This parameter is used at site creation to define the Erlang consumption of
the new Cell Group, by setting the Erlang consumption to a value different
from the one defined by the ERLANG_PER_N_TRX_CELL table.
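The Erlang B law mentioned above, in its standard iterative form (the textbook recursion, shown for illustration; the actual contents of the BSC table are not reproduced here):

```python
def erlang_b(offered_erlangs, channels):
    """Blocking probability for a given offered load and channel count."""
    b = 1.0
    for n in range(1, channels + 1):
        b = offered_erlangs * b / (n + offered_erlangs * b)
    return b

def capacity_at_blocking(channels, target=0.02):
    """Offered load (Erlang) giving the target blocking rate, by bisection."""
    lo, hi = 0.0, 2.0 * channels
    for _ in range(60):
        mid = (lo + hi) / 2
        if erlang_b(mid, channels) < target:
            lo = mid
        else:
            hi = mid
    return lo
```

For example, `capacity_at_blocking(7)` gives roughly 2.9 Erlangs for 7 traffic channels at 2% blocking, the kind of per-TRX-count figure such a table holds.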
New Exploitability Principles
5 - Fault Tolerance, Load Balancing and Overload
Two kinds of software:
• Fault Tolerance entities, “launched” by the FT application and supervised by
FT,
• non-FT entities.
GSM applications may be either Fault Tolerant (FT) or non Fault Tolerant
(non-FT).
A Fault Tolerant application is an application that is replicated.
Load Balancing only applies to Fault Tolerant applications; it relies on the
following FT primitives to balance FT applications between TMUs:
• CREATE, to create a passive entity,
• FLUSH, to synchronize a passive entity with an active one,
• SWACT, to switch activity from an active entity to a passive one,
• KILL, to destroy a passive or an active entity.
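A toy model of these four primitives (purely illustrative pseudo-implementation; real FT entities replicate call state across TMU modules, not a Python dict):

```python
class FTEntity:
    """One FT application entity with instances spread over TMUs."""

    def __init__(self, active_tmu):
        # the initial active instance; further instances start passive
        self.instances = {active_tmu: {"role": "active", "state": None}}

    def create(self, tmu):                 # CREATE: spawn a passive instance
        self.instances[tmu] = {"role": "passive", "state": None}

    def flush(self, src_tmu, dst_tmu):     # FLUSH: sync passive from active
        self.instances[dst_tmu]["state"] = self.instances[src_tmu]["state"]

    def swact(self, active_tmu, passive_tmu):  # SWACT: switch activity
        self.instances[active_tmu]["role"] = "passive"
        self.instances[passive_tmu]["role"] = "active"

    def kill(self, tmu):                   # KILL: destroy an instance
        del self.instances[tmu]
```

On a TMU failure, the Fault Tolerance software would SWACT to the passive instance, then CREATE and FLUSH a new passive one on a surviving TMU.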
6-8
New Exploitability Principles
6 - BSC 3000 Support BSS Based Solution
[Diagram: BSS location services architecture — MS, Um, BTS, Abis, BSC, with the A interface towards the MSC/VLR and the Lb interface towards the SMLC; the GMLC hosts the Location Applications and connects to the VLR via Lg and to the HLR via Lh]
In the new BSS architecture, Nortel follows the 3GPP specification concerning the
Lb interface.
The Lb interface is used only for the LCS application and relies on SS7.
There are two SS7 interfaces: A and Lb. The BSC has to manage dialogs with
multiple distant point codes (SMLC and MSC). This requires SCCP and MTP3 layers
that support multiple SSNs and multiple DPCs.
Each interface (A and Lb) relies on one distinct physical route from the BSC.
Because the SMLC, the MSC, and the BSC are part of the same SS7 network, the set
of SCCP parameters should be identical for the Lb and A interfaces.
Fault Tolerance
1 - Fault Tolerance Software
[Diagram: Fault Tolerance software — on failure of the module hosting the active instance (Module #1/Module #2), a SWACT makes the passive instance on the other module the active one]
Fault Tolerance
2 - Example: Swact on TMU Failure
>> The GSM Core Process is a set of FT applications managing a set of sites (Cell Groups):
• TMG_RAD for radio resource management
• TMG_CNX for connection management (setup, release, assignment, HO)
• TMG_MES for A interface messages (paging, incoming HO)
• TMG_L1M for Layer 1 management
• SPR for BTS site supervision
• SPT for TCU supervision
• TMG_RPP for PCUSN supervision
• OBS for observations
[Diagram: on a TMU failure, activity switches from each active instance (A1–A3) to its passive counterpart (P1–P3) hosted on the remaining TMUs]
Load Balancing
1 - Principle
[Diagram: active (A1–A4) and passive (P1–P4) instances balanced across the TMUs; SWACTs swap the roles of active/passive pairs]
Load Balancing
2 - Example: Adding a TMU
[Diagram: active (A) and passive (P) instances on TMU#1 and TMU#2 before the addition, then redistributed after a third TMU is added]
In the system, the processor load of each TMU depends mainly on the number of
BTSs/cells/TRXs to manage, and the related amount of traffic.
When there are modifications to a BTS configuration (addition of TRXs) or to a BSC
configuration (addition of TMUs), the Load Balancing service redistributes
the processing to make the best use of the BSC resources.
The chart gives an example of the use of Load Balancing when a TMU is added to
the BSC.
The initial configuration of the BSC is 2 TMUs, and one more is added and
provisioned for traffic management:
• the BSC automatically computes a new distribution and applies it,
• the re-distribution is achieved without exposure time by:
— adding new passive members to the groups,
— swapping their activity,
— suppressing the useless passive members.
The Load Balancing operation is achieved by using the Fault Tolerance service.
The redistribution of the processing is obtained by “electing” active
processes with the best location distribution (“best” here means taking into
account all the parameters that specify the LB criteria).
This “election” leads to several SWACTs achieved by Fault Tolerance.
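The "election" can be pictured as a placement computation followed by the necessary SWACTs. The greedy sketch below is illustrative Python; the real LB criteria involve more parameters than a single load figure per Cell Group.

```python
import heapq

def place(groups: dict[str, float], tmus: list[int]) -> dict[str, int]:
    """Greedy placement: assign the heaviest group to the least-loaded TMU.
    A stand-in for the BSC's richer Load Balancing criteria."""
    heap = [(0.0, t) for t in tmus]
    heapq.heapify(heap)
    placement = {}
    for name, load in sorted(groups.items(), key=lambda kv: -kv[1]):
        total, tmu = heapq.heappop(heap)
        placement[name] = tmu
        heapq.heappush(heap, (total + load, tmu))
    return placement

# Hypothetical Cell Group loads in Erlang:
groups = {"CG1": 120.0, "CG2": 100.0, "CG3": 90.0, "CG4": 80.0}
before = place(groups, [1, 2])      # initial BSC: 2 TMUs
after = place(groups, [1, 2, 3])    # a third TMU is provisioned
# Groups whose elected TMU changed need a SWACT (after creating the passive
# member on the new TMU and flushing it):
moves = [g for g in groups if before[g] != after[g]]
print(after, "SWACT needed for:", moves)
```

Running this, the new placement spreads the four groups over three TMUs and reports which groups must switch activity, mirroring the create/flush/swact/kill sequence described earlier.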
Overload
1 - Principles
[Diagram: overload principle — paging, handover, and location update requests contribute to the CPU load; CPU load, memory, and disk are monitored on the OMU, CEM, TMU, and ATM SW modules]
Overload
2 - TMU Mechanism
Only traffic management operations are taken into account in this mechanism.
Current communications are maintained (except for incoming HO requests above threshold 3).
List of messages filtered:
• Paging request
• Channel request (non emergency)
• all first Layer 3 messages (non emergency)
• HO request (traffic reason)
• HO request (O&M reason)
• Directed retry
Hysteresis is applied at each threshold.
TMU modules are relatively independent of one another in terms of overload
handling. Since a TMU module manages the traffic of a group of cells, when a TMU
module is in overload, it partially filters the new incoming traffic requests
related to the group of cells it manages.
Three overload levels are defined for each monitored processor.
For each level, some of the new traffic requests are filtered:
• level 1 (80% of processor load): traffic reduction by around 33% by filtering one
request out of three of the following messages:
— Paging Request,
— Channel Request (not Emergency Call),
— all first Layer 3 messages (not Emergency Call),
— handover for traffic reason,
— handover for O&M reason,
— directed retry,
• level 2 (90% of processor load): traffic reduction by around 66% by filtering two
requests out of three of the above messages,
• level 3 (100% of processor load): no new traffic is accepted; all of the above
messages are filtered, as well as:
— all first layer 3 messages,
— all Channel Requests,
— all handover indications,
— all handover requests.
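The 1-in-3 and 2-in-3 filtering can be sketched with a modulo counter per message type (illustrative Python; the real mechanism also applies hysteresis at each threshold, which is omitted here):

```python
from collections import defaultdict

# Requests to drop out of every 3, per overload level (from the text:
# level 1 ≈ 33% reduction, level 2 ≈ 66%, level 3 = 100%).
FILTER_OUT_OF_3 = {0: 0, 1: 1, 2: 2, 3: 3}

class OverloadFilter:
    """Per-TMU filter: drops k requests out of every 3 for each message type."""
    def __init__(self):
        self.counters = defaultdict(int)
        self.level = 0

    def accept(self, msg_type: str, emergency: bool = False) -> bool:
        if emergency:            # emergency calls are never filtered
            return True
        k = FILTER_OUT_OF_3[self.level]
        n = self.counters[msg_type]
        self.counters[msg_type] = (n + 1) % 3
        return n >= k            # positions k..2 of each window of 3 pass

f = OverloadFilter()
f.level = 1                      # 80% CPU: drop one request out of three
accepted = sum(f.accept("PAGING_REQUEST") for _ in range(9))
print(accepted)                  # → 6 (of 9 paging requests accepted)
```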
Fault Management
1 - Impact on Service in the Control Node
[Diagram: Control Node shelf layout (SIM A/B, OMU, TMU, ATM SW, and filler modules) annotated with fault impact levels — (1) no impact on traffic, (2) no disturbance of traffic or slight delay]
Fault Management
2 - Impact on Service in the Interface Node
[Diagram: Interface Node shelf layout (RC, ATM RM, CEM 0/CEM 1, SIM A/B, and filler modules) annotated with fault impact levels — (2) no disturbance of traffic, (3) no more communication]
Fault Management
3 - Impact on Service in the Transcoder Node
[Diagram: Transcoder Node shelf layout (CEM, TRM, and interface modules) annotated with fault impact levels]
Software Upgrade
1 - Overview
A new version and edition of software can be downloaded remotely without any
operational impact; only the files modified in the new version are downloaded.
Before any upgrade procedure, the equipment (BSC/TCU) must check its
hardware (flash memory checksum).
The execution of the upgrade is ordered by the OMC-R and controlled by the BSC
(in the OMU module), after the complete transfer of new files. Only the modules
that have modified software are downloaded again.
The first phase of the software upgrade can be performed long before the
upgrade of a module. This allows the upgrade data to be transferred to the MIB
(Managed Information Base) located on the ”shared” disk of the control node.
This operation is done while the BSC 3000 is working, without any service
disturbance (except a bandwidth reduction).
Then, the control node sends upgrade orders to the CEM module, which manages
the upgrade of the concerned module itself, without interrupting the services
that are running.
Upgrade and Build On Line
Performance Improvements (1/5)
UPGRADE TYPE 4
[Diagram: Control Node (CN) with OMU_P, OMU_A, CC1_1, CC1_2 in V15.1, and the Interface Node (IN)]
In the current behavior, the CN is upgraded first and then the IN upgrade is
triggered. This serialization of CN/IN upgrades was chosen to prevent
interoperability issues. In particular, it prevented having an IN in the N+1
release interacting with a CN in the N release.
This serialization of CN and IN upgrades can be relaxed provided that
interoperability is no longer an issue when the CN and IN are in heterogeneous
releases: CN in the N release and IN in the N+1 release. The IN upgrade will be
triggered as soon as the OMUs, the CC1s, and the first TMU have been upgraded
successfully.
New CC1 upgrade behaviour (previously no check was done on the ATM-RM status):
check that both ATM-RMs are enabled/on-line before:
• beginning the upgrade,
• upgrading the CC1.
ATM-RM new behaviour
• In the V15.1 release, the ATM-RM is expected not to reset when it detects a
loss of signal on the OC3 fiber.
Upgrade and Build On Line
Performance Improvements (2/5)
BUILD ONLINE
• Active OMU Application restart instead of OMU reset
• Save AIX start up and disk mounting
In V14.3, the complete restart of the BSC is triggered by the upgrade control
node manager, which sends a control node reset request to hardware management.
Upon reception of this control node reset request, hardware management first
resets the TMUs, then the CC1, the passive OMU, and finally the active OMU.
Actually, there is no need to reset the active OMU; only the applications need
to be restarted to load the new data from the new MIB. This saves the AIX
startup latency and the shared disk mounting. The average AIX startup latency
is around 4 minutes. For that purpose, upgrade CN sends a control node restart
to hardware management, which triggers a backplane control node restart.
The backplane control node restart triggers the following actions:
• Stop all applications on the active OMU
• Reboot of the OMU_TM of the active OMU
• Restart all applications on the active OMU
• Check the shared disk
The clear config request impacts the upgrade control node manager, which must
not reset the IN/TCU before resetting the control node during the activation of
the new MIB.
Upgrade and Build On Line
Performance Improvements (3/5)
UPGRADE OFF LINE
The OMU application restart will be used to restart the control node, instead
of resetting it, during a type 6 or type 7 upgrade. This OMU application
restart requires resetting the active OMU at the end of the offline upgrade, to
make sure that the low-level deliveries are loaded on the active OMU. This OMU
reset must be synchronized with load balancing and IN events.
The clear config can be leveraged during an offline upgrade to gracefully
restart the TCU instead of resetting it. For that purpose, the upgrade control
node manager sends a CLEAR_CONFIG_REQ to the TCU instead of a RESET_REQUEST.
This way the TCU is ready to be configured much sooner.
Note that the IN must still be reset, since the control node and the IN are
upgraded at the same time.
A new OMU flash upgrade protocol has been proposed to significantly decrease
the offline upgrade downtime. This protocol relies on the OMU application
restart to shorten the latency of the OMU flash upgrade. Precisely, this new
protocol replaces each control node reset by an OMU restart. Furthermore, it
also parallelizes multiple tasks that were previously serialized.
Upgrade and Build On Line
Performance Improvements (4/5)
[Timing chart: active OMU startup, active and passive CP startup, IN critical path and IN configuration, TCU clear config and TCU configuration, with phase durations between 1 and 3 minutes each]
Downtime before first call: ~ 5 min
Downtime before full duplex: ~ 7 min
Main improvements:
• OMU passive startup is postponed to the end of the control node startup,
concurrently with the startup of the passive core processes among the TMUs.
• IN critical path duration decreases to two minutes (see section 4.7.1). This
enables the IN critical path latency to overlap entirely with the active OMU
startup duration. Note that the requirement differs for IN and TCU critical path
duration improvement. IN critical path must not exceed 2 minutes whereas
TCU critical path can be a little bit longer without any impact on the overall
BSC down time.
• Core processes are started up concurrently on different TMUs.
Upgrade and Build On Line
Performance Improvements (5/5)
[Message sequence chart between the OMC, the BSC, and the TCU: TGE backgroundTcuUpgrade (offline), alarm 2034 begin / 2024 cleared, init dialog ack, PCM configuration, a "fuzzy period", upgrade offline request, startup, upgrade ack without reset, upgrade, 2024 cleared, 2034 END]
Currently, the TCU offline upgrade protocol requires locking the TCU prior to
activating the upgrade. This TCU lock incurs a very long interruption of
service, because the offline upgrade includes the flash download of the TCU
boards via the LAPD channels, hence through a limited bandwidth.
Recent performance measurements have shown that the TCU software download is
longer than the IN software download by an order of magnitude.
Software Upgrade
2 - OMU Software
[Diagram: active OMU (OMU#1) and passive OMU (OMU#2), each running applications (N) with instance pairs A1/P1 and A2/P2. The passive OMU is reset, boots with the new software, and becomes active.]
Software Upgrade
3 - TMU Software: Principle
[Diagram: TMU#1 to TMU#4 before and after — one TMU is relieved of its active and passive instances, isolated and reset, while the other TMUs (version N) carry all the instances]
The TMU upgrade is the most complex: call processing is managed by the TMUs in
real time during the upgrade, these modules are in N+P “load sharing”
redundancy, and the upgrade must be performed without any interruption of
service.
The advantage of redundancy during a software upgrade is to manage “N” and
“N+1” versions together during transient states of the system with minimal risk.
The two software versions, N and N+1 are assumed to be fully compatible.
The upgrade is always executed concurrently with GSM traffic management
remaining active.
TMU modules are upgraded one by one as follows:
• One TMU is relieved of all its processes so that active processes and passive
processes are supported entirely by the other TMUs.
• When isolated, the TMU resets and boots on the new software version: the
TMU flash is rewritten at this time.
• Once recovered, the TMU (N+1 version) joins the TMU group (N version) to
retrieve the applicative processes it hosted prior to the upgrade.
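The three steps above can be outlined as follows (a hypothetical Python sketch; the process-migration details are simplified, and `wave_size` models the configuration parameter described on the next page):

```python
def upgrade_tmus(tmus: list[dict], wave_size: int = 1) -> None:
    """Upgrade TMUs wave by wave. Each dict models one TMU:
    {'id': .., 'version': 'N', 'processes': [...]}. Simplified sketch."""
    pending = [t for t in tmus if t["version"] == "N"]
    while pending:
        wave, pending = pending[:wave_size], pending[wave_size:]
        for tmu in wave:
            others = [t for t in tmus if t is not tmu]
            # 1. Relieve the TMU: its processes move to the other TMUs.
            saved = tmu["processes"]
            for i, proc in enumerate(saved):
                others[i % len(others)]["processes"].append(proc)
            tmu["processes"] = []
            # 2. Isolate, reset and boot on the new software (flash rewrite).
            tmu["version"] = "N+1"
            # 3. Rejoin the group and retrieve the previously hosted processes.
            for proc in saved:
                for t in others:
                    if proc in t["processes"]:
                        t["processes"].remove(proc)
                        break
                tmu["processes"].append(proc)

tmus = [{"id": i, "version": "N", "processes": [f"A{i}", f"P{i % 4 + 1}"]}
        for i in range(1, 5)]
upgrade_tmus(tmus, wave_size=1)
print(all(t["version"] == "N+1" for t in tmus))  # → True
```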
Software Upgrade
4 - TMU Software: Upgrade Wave
[Diagram: upgrade wave across TMU#1 to TMU#4 — active and passive instances migrate as each TMU moves from version N to N+1]
Thanks to N+P replication, no downtime should occur during this upgrade wave.
To maintain the traffic management activity during the upgrade, the upgrade is
performed by “waves”, by one set of boards at a time:
• First, all the traffic is transferred to TMUs that are in version N.
• The other TMUs are isolated (the size of the wave is a configuration
parameter).
• The isolated TMUs are upgraded (software downloading, initialization, etc.).
• To avoid service interruption, passive members are first created on the newly
upgraded boards.
• Finally, activity is transferred to them.
• During the period of coexistence of the two releases, some restrictions may
apply depending on the compatibility level between the two versions: no
handover between the N and N+1 areas, etc.
Software Upgrade
5 - CEM or RM Software Upgrade (ATM-RM, 8K-RM,
IEM)
[Diagram: active and passive CEM or RM, each running applications (N) with instance pairs A1/P1, A2/P2, A3/P3]
For the CEM modules and the RMs, which have a 1+1 redundancy factor, the
upgrade of the protection group is done as follows:
• the software packages are loaded onto the passive RM or the passive CEM
module,
• a SWACT is performed between:
— the passive CEM module and the active CEM module,
— the active RM and the passive RM.
Software Upgrade
6 - TRM Software Upgrade
[Diagram: TRM1 to TRM10]
STEP 1
• Soft blocking on the first module
• Load sharing on each of the other TRM modules
• Load of the N+1 release on TRM1
STEP 2
• Soft blocking on the second module
• Load sharing on each of the other modules
For the TRMs, which have an N+P redundancy factor (P=1), the upgrade of the
protection group is done as follows:
• a “soft blocking” order is sent to the TRM concerned,
• new communications are distributed to the other TRMs,
• when the communications in progress on the TRM concerned have ended, the
software upgrade is performed.
Hot Insertion/Extraction
1 - Overview
Hot module insertion or extraction:
• without service interruption
• "plug and play"
• automatic or half-automatic configuration capability
The hardware modules of the BSC 3000 have a hot insertion and extraction
capability. This means that a hardware module can be replaced or added in the
equipment without shutting down the machine even partly and without any impact
on service.
Furthermore, the BSC 3000 offers “plug & play” (or auto discovery) capability both
for equipment startup and for module hot insertion.
The modules are automatically detected, started and configured allowing an easy
and efficient maintenance of BSC 3000 and TCU 3000 hardware equipment.
The BSC 3000 and TCU 3000 report information about their hardware
configuration automatically to the OMC-R.
Because of this architecture, the “hot plug & play” feature does not apply to the
LSA-RC module: TIM and RCM boards are not involved.
Module extraction
When a module is extracted, a notification is sent to the OMC-R: this notification is
a state change to “disabled/{notInstalled}” of the object that was previously in the
slot. On reception of this state change, the OMC-R deletes the corresponding
logical object and removes it from the HMI and the MIB.
An alarm is generated at OMC-R level on the father object to indicate that a
module has been removed.
Hot extraction of a module can be performed without any tools, but the OMU
and MMS modules require an operator action on the front-face pushbutton, using
a pencil.
Hot Insertion/Extraction
2 - Hot Insertion Procedure
[Diagram: hot insertion dialog between the OMC-R database (BDE/BDA) and the BSC MIB, using TGE/RGE transactions]
TGE = Transaction Globale d’Exploitation (global operation transaction)
RGE = Réponse Globale d’Exploitation (global operation response)
Module insertion
C-Node (Control Node) and I-Node (Interface Node) objects are automatically
created when the user creates the BSC 3000 object on the OMC-R.
The Platform sends notifications indicating the hardware configuration. This
hardware configuration is detected on the corresponding platform object
(C-Node, I-Node, LSA or T-Node).
This information is stored on the MMS disk and sent to the OMC-R. It can be read
on the MMS disk, even when a module is out of service.
The information is also stored at OMC-R level and can be displayed upon operator
request.
Module hot insertion may be described as follows:
• module insertion by the craftsperson,
• hardware detection and BIST,
• front panel LED state depending on BIST results,
• verification by the craftsperson that the LED state is correct,
• hardware detection notification including BIST results sent towards the
OMC-R,
• the module is created at the OMC-R and is displayed on the HMI.
Fault Management
1 - Remote Maintenance Capability
BSC 3000 & TCU 3000 hardware management from the OMC-R is based on the
hardware detection capability of the new generation platform. All faults concerning the
components of an object are reported to the OMC-R.
The FM application is hierarchically structured: each level of the OA&M
function (processor, module, I-Node, C-Node, BSC) is able to detect, analyze,
filter, and react to a fault, provided that level is able and authorized to
manage the fault, given the potential system-wide impact of the fault.
For example, an ATM fault detected between the ATM SW and TMU modules can not be
corrected directly by the TMU/OA&M application, but only by the Control Node/OA&M
application located on the OMU module.
The OMU is the FM master module for the Control Node and the BSC 3000; it
stores the fault events in circular files and sends them to the OMC-R.
The CEM is the FM master module for the Interface Node. Two kinds of information are
sent by the Interface Node in the case of equipment failure:
• the state changes treated by the I-Node/OA&M application,
• the details of the fault, forwarded to the OMC-R for maintenance purpose.
There are two levels of fault:
• faults that do not impact the availability of the object: failure of an IEM
(LSA-RC), a CEM, an ATM-RM, or an 8K-RM,
• faults that make the object unavailable: failure of both cards or modules, failure on a
TIM of an LSA-RC.
In the case of hardware failure, a craftsperson needs to repair the failure by changing the
faulty module.
Fault Management
2 - On Board Inventory Information
[Diagram: BSC 3000 shelves (SIM A/B, OMU, TMU, MMS, ATM SW, and filler modules, slots 1 to 15); a faulty TMU board reports its version, shelf, slot, and serial number]
• Fast detection
• Reliable diagnostic and identification
• Fix without any service interruption
The fault events sent to the OMC-R contain all the information necessary for
supervision and maintenance: type of fault, criticality, service impact, and
impacted hardware.
Hardware failures are notified directly on the related hardware module, so that the
OMC-R can display the failed equipment precisely to the operator.
On board inventory information for the equipment (BSC 3000 and TCU 3000):
• Physical location,
• Site,
• Unit,
• Floor,
• Row Position,
• Bay Identifier.
For the FRUs (Field Replaceable Units):
• Serial number (Corporate Standard 5014.00 compliant),
• Module Name (generic name of the module family),
• Module type (PEC code = product engineering code),
• Hardware release,
• Hardware position (shelf, slot).
Fault Management
3 - LED of all Modules in BSC 3000 and TCU 3000
(except MMS modules)
[Table: red/green LED combinations — not powered, BIST running, module is active, module is passive, alarm state]
All BSC 3000 & TCU 3000 modules have the same two LEDs on the upper part of
the front face of each module to facilitate on-site maintenance and to reduce the
risk of human error.
This table gives the description, combinations and states of the red LED and the
green LED for each module (except the MMS module) inside the BSC 3000
cabinet and the TCU 3000 cabinet.
Fault Management
4 - LED of MMS Modules in the BSC 3000
[Table: red/green LED combinations for the MMS modules, including the alarm state]
This table gives the description, combinations and states of the red LED and the
green LED for the MMS modules in the BSC 3000 cabinet.
The round yellow LED on the disk blinks to indicate a read/write operation.
Remote ACcess Equipment RACE
1 - HTTP/RACE Server on an OMC-R WorkStation
[Diagram: RACE architecture — OMC-R servers and the RACE/HTTP server on an Ethernet LAN; remote terminals reach the OMC-R through an IP network (Intranet/Internet, via firewalls) or through the PSTN via modems; TML/RACE access is also possible at BSC sites (BSC 3000) and BTS sites (S8000, S4000/S12000, S2000E, S18000)]
Remote ACcess Equipment RACE
2 - Overview
[Diagram: the Web browser connects to the HTTP server, which interfaces with the OMC-R MMI/WWW server and kernel]
This new application is composed of Web pages and Java applets that can be run
in a Web browser (Netscape or Internet Explorer).
This new application is adapted to individual operator needs: when the operator
must work from home, or when operations from BTS or BSC sites are required.
A better presentation of the data allows the customers to save time. For
instance, when an operator had to modify a list of parameters and made a
mistake:
• with the ROT, it was mandatory to re-enter all the information,
• with the RACE, using the “Back” button of the browser, the operator just has
to modify the wrong parameters.
The only requirement for this feature is a Web browser, which brings two
advantages:
• all data are stored on the server and are downloaded at connection, so the
installation of a RACE client is done very quickly and then there is almost no
upgrading to be provided on the client side,
• the operator can use a PC to connect to the OMC-R; such an OMC-R station is
cheaper than a Unix station.
Finally the RACE can run on either an OMC-R WorkStation or an OMC-R server,
with a standard Internet browser for Unix.
Local Maintenance Terminal TML
1 - Overview
[Diagram: the TML reaches the BSC 3000 HTTP server (HTML/Java) and the manager over a physical path]
Local Maintenance Terminal TML
2 - Principle
[Message sequence: the Web browser loads http://mmm.ii.jjj.kk/BSC3000.html from the HTTP server, downloads the HTML page and Java applet, then the TML application tries the connection, sends USER and PASSWORD, sends commands to the Test Server, and receives answers]
Using a web browser, the TML operator loads an HTML page (through HTTP)
holding the TML applet. The TML applet is then downloaded to the TML PC using
the HTTP server.
Once the TML software is loaded on the TML PC, it is possible to start a test
session. The messages between the TML and the BSC are exchanged over a TCP/IP
connection.
The TML communicates with the “Test server” software module.
The TML accesses the MIB for:
• modification of commissioning data:
— OMC-R link definition (IP, direct, …),
— PCM trunk setup,
— physical location definition (name, floor),
• checking software and hardware marking information.
Student notes:
BSC 3000 and TCU 3000
Provisioning
Section 7
Objectives
Contents
BSC 12000HC and BSC 3000 Comparison
Maximum values               For one BSC 3000      For three BSC 12000HC   For one BSC 12000HC
Erlang                       3000                  3600                    1200
TRX                          1000                  960                     320
Cells                        600                   480                     160
BTS                          500                   414                     138
LAPD links                   567                   120                     40
SS7 links                    16                    18                      6
E1/T1 links                  126/168               144/144                 48/48
A circuits                   3112                  3780                    1260
Power consumption (kW)       2.0                   5.1                     1.7
Cabinet dimensions (cm/in)   W: 96/37 D: 60/23     W: 468/182 D: 60/23     W: 156/61 D: 60/23
                             H: 220/86             H: 200/78               H: 200/78
Weight (kg/lb)               570/1254              1620/3564               540/1188
Floor load (kg/m² / lb/ft²)  1000/205              600/120                 600/120
TCU 2G / TCU 3000            32 / 2                36 / 0                  12 / 0
TCU 2G and TCU 3000 Comparison
[Table: maximum values for one TCU 3000 versus thirty TCU 2G and one TCU 2G]
BSC 3000 Provisioning
1 - BSC 3000 versus BSC 2G
[Diagram: ATM SW in 1+1 redundancy; TMU modules (300 Erlang each) for traffic management in N+P redundancy]
Maximum number of sites per configuration:
SITE (w/ 1 LAPD channel): 240, 300, 300, 360, 420, 480, 500, 500, 500, 500
SITE (w/ 2 LAPD channels): not standard, 120, 150, 150, 180, 210, 240, 270, 300, 300, 300
SITE (w/ 3 LAPD channels): 80, 100, 100, 120, 140, 160, 180, 200, 200, 200
The redundancy concept of the BSC 3000 / TCU 3000 is different from the 2G
BSC/TCU.
We no longer speak of two redundant chains, but of a per-module or per-card
(LSA) redundancy.
Except for the TMU module for which an N+P redundancy is implemented, all the
modules are 1+1 redundant.
Entire Call Processing of the BSC 3000 is based on the TMU module and the
dimensioning for this module is based on the estimated traffic load (maximum 300
Erlang per TMU).
The estimated traffic for each site is calculated by the BSC 3000 by taking
into account the sum of each cell's traffic (based on the number of TRXs per cell).
The BSC 3000 estimates by itself the number of TMUs needed to reach the
capacity required by the sites/cells/TRXs configured by the OMC.
If the number of installed TMUs is less than the calculated number, the BSC
notifies the OMC-R, via a Load Balancing Anomaly, of how many TMUs are needed
to reach the required capacity.
Because the BSC 3000 has no fixed configurations (like types 1 to 5 for the
BSC 2G), the OMC-R only verifies the maximum dimensioning of the BSC 3000.
BSC 3000 Provisioning
2 - Number of TMUs
– P is the minimum number of TMUs needed to run all the passive processes.
– 2 is the number of TMUs needed to run the SS7 active and passive processes.
Capacity (Erlang)   600   900   1200   1500   1800   2100   2400   2700   3000
M                   2     3     4      5      6      7      8      9      10
P                   1     1     1      1      2      2      2      2      2
SS7                 2     2     2      2      2      2      2      2      2
Total               5     6     7      8      10     11     12     13     14
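This table can be reproduced by a simple rule (Python sketch; the protection rule P = ceil(M/5) is inferred from the table values, not stated explicitly in the text):

```python
import math

ERLANG_PER_TMU = 300  # maximum traffic load handled by one TMU

def tmu_count(erlang: float) -> int:
    """Total TMUs = M (traffic) + P (passive/protection) + 2 (SS7)."""
    m = math.ceil(erlang / ERLANG_PER_TMU)
    # Inferred from the table: P = 1 up to M = 5, P = 2 up to M = 10.
    p = math.ceil(m / 5)
    return m + p + 2

for cap in (600, 1800, 3000):
    print(cap, "Erlang ->", tmu_count(cap), "TMUs")
```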
In order to have a well balanced processing load between TMUs, two mechanisms
have been implemented:
• The BSC 3000 distributes the sites into Cell Groups using a special
algorithm, so that a number of equally loaded Cell Groups is obtained:
— a Cell Group is a logical entity containing several sites,
— all the Cells and the TRXs belonging to one site are in the same Cell
Group.
• The Cell Groups are then distributed over the existing TMUs. A TMU is
capable of managing up to 16 Cell Groups (8 active and 8 passive). The active
Cell Groups from one TMU will have their passive instance on another TMU.
Taking into account these considerations, the BSC 3000 capacity can be defined.
BSC 3000 Provisioning
3 - Abis LAPD Channels
[Table: maximum number of each GSM object versus the total number of TMUs (1 to 14)]
BSC 3000 Provisioning
4 - TMU2
Number of TMU2s per Erlang capacity (600 to 3000 Erlang):
N      2  2  3  3  4  4  5  6  6
P      1  1  1  1  1  1  1  1  1
SS7    2  2  2  2  2  2  2  2  2
Total  5  5  6  6  7  7  8  9  9

Number of TMU2s per LAPD capacity:
N      1  2  3  4  5  6
P      1  1  1  1  1  1
SS7    2  2  2  2  2  2
Total  4  5  6  7  8  9
LAPD channels: not supported, 110, 220, 330, 440, 550, 567
The TMU2 objective is a capacity 1.75 times that of the current TMU1 in terms
of Erlang processing capability, and twice that in terms of signaling
(LAPD/SS7) ports offered. This means that the maximum BSC 3000 Erlang capacity
may be reached with only 9 TMU2s (7 instead of 12 for GSM call processing
applications, and 2 for SS7 management).
Mixed TMU1/TMU2 Configurations
> Erlang Capacity Vs TMU1 & TMU2 number
[Chart: Erlang capacity versus TMU1 and TMU2 number (SS7 TMUs and redundant TMUs not taken into account)]
As the TMU2 capacity is greater than that of the TMU1, the dimensioning rules
regarding the number of TMUs needed for a chosen target Erlang capacity are
modified.
Mixed TMU1/TMU2 Configurations
> LAPD Capacity Vs TMU1 & TMU2 number
[Chart: LAPD capacity versus TMU1 and TMU2 number (SS7 TMUs and redundant TMUs not taken into account)]
TRM2 Dimensioning
Dimensioning figures:
• FR archipelago capacity: 96 circuits (vs 72 for TRM board)
• EFR archipelago capacity: 96 circuits (vs 72 for TRM board)
• AMR archipelago capacity: 96 circuits (vs 60 for TRM board)
• EFR_TTY archipelago capacity: 84 circuits (vs 48 for TRM board)
Thus the capacity of a TRM2 using three FR, EFR or AMR codec will be
288 circuits.
Nb of TCU     Nb of TRM2       Nb of TRM2          Nb of   Nb of voice   Nb of     Nb of SS7       Capacity
shelves/BSC   w/o redundancy   (with redundancy)   LSAs    channels      Ater E1   LAPD channels   (Erl)
1             1                1+1                 1       288           3         2               247
1             2                2+1                 2       576           4         4               521
1             3                3+1                 2       864           5         4               798
1             4                4+1                 3       1152          7         6               1078
1             5                5+1                 3       1440          9         6               1359
1             6                6+1                 4       1728          9         8               1641
1             7                7+1                 4       1944          16        8               1923
Dimensioning for configurations without any EFR_TTY codec configured.
The dimensioning rules, regarding the number of needed TRM boards in a TCU
3000 cabinet, take into account the TRM capacity in terms of maximum number of
terrestrial circuits that can be managed.
BSC 3000 Provisioning
5 - GPRS Impact
[Diagram: GPRS architecture — the BSC 3000 (V15) connects via Abis towards the BTSs and via Ater/Agprs to the PCUSN; the PCUSN connects to the SGSN and GGSN towards the PSPDN; the HLR, SMSC, and EIR sit on the core network side]
PCUSN - Packet Control Unit Support Node
PSPDN - Packet Switched Public Data Network
SGSN - Serving GPRS Support Node
GGSN - Gateway GPRS Support Node
GPRS entails no BSC capacity decrease in terms of processing. In other words, the
processing power of the TMU and of the other processing boards is not a limiting factor
for GPRS dimensioning.
Only the PCM connectivity (Abis + Ater + Agprs) and the circuit switching capacity of the
BSC 3000 have to be taken into account for GSM and GPRS network engineering.
In urban areas, the BSC 3000 has enough PCMs available so that the GPRS introduction
can be done without any PCM dimensioning constraints.
For example, a maximum-capacity BSC 3000 managing a BSS network made up mainly
of S444 BTSs will need around 90 PCMs for Abis and Ater, out of 126.
Therefore, whatever the GPRS profile is, there will be enough additional PCMs available
for Agprs.
In rural areas (BTS S111 & S222), all PCMs might be used for voice service only.
The introduction of GPRS can then impact the BSC 3000 capacity in terms of the number
of managed BTSs & TRXs.
The maximum circuit switching capacity of the BSC 3000 (2268 64-kbit/s circuits) shall be
taken into account in the dimensioning of a voice + GPRS network.
The switching capacity is not a limitation for voice-only and for low-speed GPRS services
(CS1/CS2).
For high-speed data services, since the radio time-slots carrying those services require
more circuits on Abis and Agprs (2 to 4 times more than for voice and low-speed packet
data), the BSC 3000 switching capacity limit can be reached for some network
configurations, especially for high data penetration (for example 8 radio TS per cell for
GPRS).
The impact on BSC 3000 capacity in terms of the number of managed TRX has to be
determined on a case-by-case basis, according to the network configuration.
BSC 3000 and TCU 3000 Configurations
1 - Min and Max Configurations
This table gives the dimensioning factors for the BSC 3000 & TCU 3000 in
minimum and maximum configurations.
BSC configuration
• The minimum configuration is a 600 E, which translates to 3 TMUs (2+1 for
redundancy) and 2 LSAs (42 E1 or 56 T1 PCMs).
• The maximum configuration is a 3000 E, which translates to 12 TMUs (10+2
for redundancy) and 6 LSAs (126 E1 or 168 T1 PCMs); the TCU function will
require two Transcoding nodes.
Between these two configurations, all configurations can be offered;
nevertheless, some product engineering rules are defined to avoid inconsistency
between the number of TMUs and the number of LSAs.
TCU configuration
• The minimum configuration is a 200 E TCU 3000, which translates, in the case
of Enhanced Full Rate, to 2 TRM modules (1+1 redundant) and 1 LSA (21 E1
or 28 T1 PCMs).
• The maximum configuration is a 1800 E: up to 11 TRMs (10+1 redundant) and
4 LSAs in each of the 2 nodes of a TCU cabinet. The TCU 3000 cabinet can
be connected to the same BSC or to 2 different BSCs.
Note: The TCU 3000 can have a maximum of 12 TRM modules if required.
Between these minimum and maximum configurations, all configurations can be
offered. Nevertheless, in the TCU 3000 the number of TRMs and the number of
LSAs are directly related to the required A interface capacity.
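The configuration bounds above can be sketched as a plain range check (BSC: 3 to 12 TMUs and 2 to 6 LSAs; TCU node: 2 to 11 TRMs and 1 to 4 LSAs). This is not the full set of Nortel product engineering rules relating TMU and LSA counts, and it uses the standard 10+1 TRM maximum rather than the 12-TRM special case.

```python
# Sketch of the min/max configuration bounds from the text.
def bsc_config_valid(n_tmu, n_lsa):
    """Min 600 E config: 3 TMUs / 2 LSAs; max 3000 E: 12 TMUs / 6 LSAs."""
    return 3 <= n_tmu <= 12 and 2 <= n_lsa <= 6

def tcu_node_config_valid(n_trm, n_lsa):
    """Min 200 E node: 2 TRMs / 1 LSA; max 1800 E node: 11 TRMs / 4 LSAs."""
    return 2 <= n_trm <= 11 and 1 <= n_lsa <= 4
```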
BSC 3000 and TCU 3000 Configurations
2 - BSC 3000 and TCU 3000 Typical Examples
Nortel Networks will define some market model configurations (rural, semi-urban,
urban, etc.) and some optional extension kits (comprised of TMU, TRM, LSA) in
order to satisfy most of the product configurations required by customers:
• a rural type of configuration, with a relatively low number of TMUs (because
the traffic capacity is low) and a maximum number of LSAs (because many
small BTSs used for coverage need to be connected),
• an urban type of configuration, with a high number of TMUs (high traffic
capacity) and a relatively low number of LSAs (because BTSs have many
TRXs per cell, and there are relatively few BTSs to be connected to the BSC).
Market models and market packages are defined both to optimize the end-to-end
supply chain from the order to the delivery of the products to the customer, and to
satisfy most of the configurations requested by the customers.
Market packages allow a market model to be modified by adding extension kits,
to fit the customer's request as closely as possible.
Exercises Solutions
Section 8
nortel.com/training
Internal BSC Dialogues
[Figure: internal dialogues between the Control Node (OAM OMUs, MMSs, TMUs running Traffic Management, duplicated ATM switch planes 1 and 2) and the Interface Node Switching Unit (ATM RMs, CEM, ATM/PCM interfaces at 64 kbit/s, LSARCs, 8K RM at 8 kbit/s, and PCM controllers toward the BTSs and the TCUs)]
Internal BSC Dialogues
[Figure: second internal-dialogue solution, shown on the same Control Node / Interface Node diagram (OAM OMUs, MMSs, TMUs, ATM switch planes 1 and 2; Switching Unit with ATM RMs, CEM, ATM/PCM interfaces, LSARCs, 8K RM, and PCM controllers toward the BTSs and the TCUs)]
Traffic (Circuit and Packet Switch) Path
[Figure: circuit- and packet-switched traffic path from the BTS through the BSC Interface Node (ATM/PCM interface, CEM, ATM RM, Switching Unit), under the control of the Control Node (TMUs with Traffic Management, OAM OMU, ATM switches, PCU), to the TCU Transcoding Node (ATM RM, CEM, ATM/PCM interface)]
GSM Signaling Path
[Figure: LAPD signaling path from the BTS through the BSC Interface Node (PCM controllers, LSARCs, 8K RM, ATM/PCM interfaces, CEM, ATM RMs) and Control Node (TMUs, OAM OMU, ATM switches), then through the TCU Transcoding Node (TRM vocoders, PCM controllers) toward the MSC. Legend: BTS LAPD signaling; full-TS LAPD signaling; LAPD signaling on ATM]
GSM Signaling Path
[Figure: SS7 processing split between two TMUs: MTP1 and MTP2 handled by TMU A; MTP3 and SCCP handled by TMU B]
BSC 3000/TCU 3000 Dialogue
[Figure: OA&M and call processing dialogue (1/2), both handled by the same TMU. The exchange runs from the Control Node through the Interface Node Switching Unit to the TCU Transcoding Node over 64-kbit/s channels.]
BSC 3000/TCU 3000 Dialogue
[Figure: OA&M and call processing dialogue (2/2), handled by different TMUs, following the same path from the Control Node through the Interface Node Switching Unit to the TCU Transcoding Node.]