Issue: Unable to make an internal phone call. The error is: High Traffic.
Two remote sites cannot reach each other.
- Phone 1 >> CUCM >> Phone 2
On Webex:
The problem only happens if we call a number that does not exist on the
other cluster, then hang up and call a number that exists.
I took CallManager traces over a range of 5 minutes and there were a lot of
files.
>>>
On the traces:
|CallingPartyNumber=1441
|DialingPartition=Internal
|DialingPattern=XXXX
|FullyQualifiedCalledPartyNumber=6317
|CallingPartyNumber=1441
|DialingPartition=Internal
|DialingPattern=XXXX
|FullyQualifiedCalledPartyNumber=6586
h323-uu-pdu
h323-message-body releaseComplete :
Problem Description:
The problem only happens if we call a number that does not exist on the
other cluster, then hang up and call a number that exists.
For example, we call from 1441 on CUCM 9 to 6317 (this number does not
actually exist on the cluster with CUCM 7) and the "number does not exist"
message comes up.
Then we call from 1441 to 6586 and get the high traffic error message.
There is a route pattern "XXXX" on both clusters, so when a call is made to
a number that does not exist, it hits that route pattern and is sent back to
the previous cluster.
I hope everything is going well. I would like to let you know that after
double-checking the logs I found that the call is getting into a loop.
You have a route pattern "XXXX" on the cluster with CallManager 9 and also a
route pattern "XXXX" on the cluster with CallManager 7, so when we call a
number that does not exist in the system we are actually hitting the route
pattern "XXXX", and the call is sent back to the previous cluster, entering
a loop between the two "XXXX" route patterns.
Please go to the intercluster trunk and look for the section "Inbound
Calls" > Calling Search Space, and make sure that the calling search space
does not have access to the partition belonging to the route pattern "XXXX".
Please do this for both intercluster trunks.
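The loop and the fix can be sketched in a small simulation. This is a hypothetical model, not CUCM internals: the cluster names and extensions come from the case above, but the routing function, the `MAX_HOPS` limit, and the `ict_css_includes_xxxx_partition` flag are illustrative inventions.

```python
# Hypothetical sketch of the routing loop described above.
# MAX_HOPS stands in for whatever real loop protection eventually fires.
MAX_HOPS = 10

def route(clusters, cluster, dialed, via_ict=False, hops=0):
    """Return the routing outcome for a dialed digit string."""
    if hops >= MAX_HOPS:
        return "loop detected (High Traffic)"
    c = clusters[cluster]
    if dialed in c["local_extensions"]:
        return f"delivered on {cluster}"
    # The catch-all XXXX route pattern forwards unknown numbers over the ICT.
    # With the fix applied, the ICT's inbound CSS cannot see that partition,
    # so a call arriving over the ICT never re-matches XXXX.
    if c["ict_css_includes_xxxx_partition"] or not via_ict:
        return route(clusters, c["ict_peer"], dialed, via_ict=True, hops=hops + 1)
    return "number does not exist"

clusters = {
    "CUCM9": {"local_extensions": {"1441"}, "ict_peer": "CUCM7",
              "ict_css_includes_xxxx_partition": True},
    "CUCM7": {"local_extensions": {"6586"}, "ict_peer": "CUCM9",
              "ict_css_includes_xxxx_partition": True},
}

print(route(clusters, "CUCM9", "6317"))   # nonexistent number -> loops
for c in clusters.values():               # apply the CSS fix on both ICTs
    c["ict_css_includes_xxxx_partition"] = False
print(route(clusters, "CUCM9", "6317"))   # now rejected cleanly
print(route(clusters, "CUCM9", "6586"))   # valid cross-cluster call still works
```

The point of the sketch: the fix does not remove the "XXXX" pattern; it only makes the pattern invisible to calls arriving over the ICT, so a bounced call dies with "number does not exist" instead of ping-ponging.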
ID:IPTUCM2
ICT looping can be created by a misconfigured route pattern. This can cause
Cisco CallManager to run at high CPU for a long period of time or crash the
server. To solve this problem, Cisco CallManager has added logic in the
H.225 device (for trunk devices only) to monitor the number of outstanding
transit calls. A transit call is a call for which Cisco CallManager has
received (or sent) the setup request but has not yet received or sent the
first backward message: for example, call proceeding, call progress, alert,
connect, or release complete.
Cisco CallManager runs a five-second timer to monitor the transit call
queue for the H.225 trunk device. If the number of transit call queue
entries is greater than a predefined threshold, then, for a period of time
(default 30 seconds), all new incoming or outgoing call requests to that
H.225 trunk device are rejected with a Release Complete message carrying
cause code Switch System Congestion.
By throttling new call requests when there is an unusually high number of
outstanding transit calls, CallManager is able to break the intercluster
loop and protect itself from running at high CPU. The 5-second monitor timer
is based on the standard requirement that it can take up to 4 seconds to
receive the first backward message. The predefined throttling threshold is
140 transit calls per 5 seconds per intercluster trunk, i.e. 28 transit
calls per second per intercluster trunk, which adds up to roughly 100k BHCA
per intercluster trunk. The call throttling period can be adjusted by a new
CallManager service parameter, "TimerH225ICTCallThrottle"; the default value
is 30 seconds. When this service parameter is set to 0, no call throttling
mechanism is applied. Test results show that with 30 seconds of call
throttling, CallManager comes back down to normal CPU in one to two
minutes.
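The throttling behavior described above can be modeled with a few lines of code. The threshold (140 per 5-second window) and the 30-second throttle period (`TimerH225ICTCallThrottle`) come from the text; the class and method names are illustrative, not CUCM internals.

```python
# Minimal sketch of per-trunk transit-call throttling, assuming the
# numbers stated in the text; everything else here is hypothetical.
THRESHOLD = 140         # transit calls allowed per 5-second window
WINDOW = 5.0            # seconds: the monitor timer
THROTTLE_PERIOD = 30.0  # seconds: default TimerH225ICTCallThrottle

class IctThrottle:
    def __init__(self):
        self.window_start = 0.0
        self.transit_count = 0
        self.throttled_until = 0.0

    def admit(self, now):
        """Return True if a new transit call on this trunk is admitted."""
        if now < self.throttled_until:
            # Rejected: Release Complete / Switch System Congestion.
            return False
        if now - self.window_start >= WINDOW:
            # New 5-second window: restart the transit-call count.
            self.window_start = now
            self.transit_count = 0
        self.transit_count += 1
        if self.transit_count > THRESHOLD:
            self.throttled_until = now + THROTTLE_PERIOD
            return False
        return True

# Sanity check on the stated arithmetic:
# 140 calls / 5 s = 28 calls/s; 28 * 3600 s = 100,800 busy-hour call attempts.
print(140 / 5 * 3600)  # -> 100800.0
```

A looping call re-enters the trunk far faster than 28 calls per second, so the window fills almost immediately and the 30-second rejection period starves the loop, which is why CPU recovers within a minute or two.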
--
Rishabh Marwaha
ICTs are able to reach partitions containing route patterns that go back
out this same ICT.
If this is the case, then we can make a new CSS for the ICT trunk on the
CallManager which only has access to
>>>>
I have seen the message once myself on a phone; the call flow was the user
trying to call the voicemail system by pressing the Messages button on the
phone. Other than that I have not seen the error, but they say it happens
regularly.
Do you have any configured VG ports which are not currently registered on
CallManager?
Is this happening only for calls which are being forwarded to voicemail
numbers?
It seems that when the voicemail ports get exhausted, we see a similar kind
of error on the phones.
15:00:54.671 |CcmCcmdbHelper::buildLineStructGivenLiteDnData -
getCallForwardDynamicRecordGivenNumPlanPkid() FAILED for device
name(CiscoUM1-VI1), numplan pkid(828c0f85-2a4f-638e-7088-cc3b906b370c) -->
IGNORING|*^*^*
Are we using a hunt list to route calls to different voicemail ports? If
yes, please reset the hunt list and then check if any such error occurs
again.
That message would be a call to that building's hunt list that nobody
answered, so it went to voicemail.
The problem we are seeing is when a user tries to check their voicemail
from their phone by pressing the Messages button (2010) and the call goes
to 8000.
When I set up the system I used the voicemail port wizard in the CM
administration. We have 24 ports configured, and when I use the remote port
status monitor I can see that all of the ports are being used in a
round-robin fashion. But only 1 or 2 ports are ever used at a time.
Are we using a SIP trunk to integrate CallManager with Unity Connection?
Also, please check if we have high virtual memory usage on the server.
Please reset the hunt list in off-hours and check if the same behavior
recurs.
Do send me the RIS Data Collector perfmon logs from RTMT for all the
servers so we can check the memory usage.
Asked him how many ports he is using and how many ports are in use when the
"High Traffic Try Again Later" error occurs. He said that he checked his
line group configuration and saw that only two voicemail ports are added.
He will be adding the other 22 of them in off-hours and will get back to
me.
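The root cause above can be sketched with a toy model. This is illustrative only, not CUCM's hunt algorithm: the port names and the selection function are invented, but the numbers (24 configured ports, only 2 in the line group) are from the case.

```python
# Hypothetical sketch of why only two line-group members caused the error:
# 24 ports exist, but a call can only land on a port that is IN the group.

def route_to_voicemail(line_group, busy_ports):
    """Pick the first idle port in the line group, or reject the call."""
    for port in line_group:
        if port not in busy_ports:
            busy_ports.add(port)
            return port
    return None  # caller hears "High Traffic Try Again Later"

configured_ports = [f"VM-Port-{i}" for i in range(1, 25)]  # 24 ports exist
line_group = configured_ports[:2]  # ...but only 2 were added to the group

busy = set()
print(route_to_voicemail(line_group, busy))  # first caller gets a port
print(route_to_voicemail(line_group, busy))  # second caller gets the other
print(route_to_voicemail(line_group, busy))  # third simultaneous caller: None
```

This also explains the earlier round-robin observation: the ports that were in the group rotated normally, so the configuration looked healthy until a third simultaneous voicemail call arrived.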
After this, the issue was resolved.