Who would have thought that the wholesale telecoms interconnect market could be so exciting? I spent a day last week at the excellent IPX Summit in London, together with a host of experts in the field. What I learnt about was the continuing struggle the telecoms industry faces to move from a circuit to a packet-based world. In this newsletter I will focus on IPX as the most real example of how (not?) to create new and valuable quality-managed information delivery services.
bilateral agreements are driven not only by the desire to avoid the costs of hub intermediaries, but also by the desire to avoid the reputational risks of call quality problems. Nirvana in this space is a set of wholesale products that are perfectly characterised, deliver exactly what they say they will, and, when you compose them end-to-end (even when bought from multiple suppliers), deliver a predictable outcome and customer experience.
practice, you have session border controllers (SBCs) absolutely everywhere, shoehorning packet data back into a more circuit-like model. The value-add of these SBCs is somewhat opaque. (Readers from Genband et al are welcome to provide me with suitable enlightenment or self-justification!)

There are three basic issues that have to be solved to create this new IPX-enabled value chain and make it work:

1. How to make the SLAs compose? Imagine we want to get from domestic network A to network E in another country. This goes via wholesaler B to wholesaler C to wholesaler D. We want SLA(A-E) = SLA(A-B) + SLA(B-C) + SLA(C-D) + SLA(D-E). Will this equation hold and be meaningful?

2. How to measure the service quality across the above, individually and collectively? You want to know whether you did what you said you would do.

3. How to detect and punish cheats who don't keep their SLA promises and break the chain? The wholesale operators have long experience of managing fraud at all levels of the network business: leaves, branches and trunk.

These are examined in the next few sections: we're not just interested in how IPX works, but also in how it breaks.
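To make issue (1) concrete, here is a toy sketch (the class, field names and numbers are entirely invented) of the composition operators would like to hold: delay budgets add, loss probabilities compound, and bandwidth is the narrowest segment. The rest of this article is about why this naive algebra is not, by itself, meaningful for statistically multiplexed networks.

```python
from dataclasses import dataclass
from math import prod

@dataclass
class SLA:
    max_delay_ms: float    # one-way delay bound
    max_loss: float        # packet loss probability bound
    bandwidth_mbps: float  # committed capacity

def compose(segments):
    """Naive end-to-end composition: delays add, survival
    probabilities multiply, bandwidth is the narrowest segment."""
    return SLA(
        max_delay_ms=sum(s.max_delay_ms for s in segments),
        max_loss=1 - prod(1 - s.max_loss for s in segments),
        bandwidth_mbps=min(s.bandwidth_mbps for s in segments),
    )

# Four invented segments of the wholesale chain:
path = compose([SLA(20, 0.001, 100), SLA(30, 0.002, 50),
                SLA(25, 0.001, 100), SLA(20, 0.001, 80)])
print(path)  # 95 ms budget, ~0.5% loss, 50 Mbps
```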
Composing SLAs
An issue that afflicts the whole telecoms industry is that the metrics and thinking used to deliver TDM circuits are no longer helpful in a packet-based, statistically multiplexed world. TDM circuits have fixed delay and effectively zero loss. You can characterise them with two parameters: their bandwidth capacity, and where they go. It was easy to add up the delays end-to-end, and your capacity was that of the narrowest part of the path. Composing this system and its SLAs was easy. Voice capacity planning was also easy: Masters courses in telecoms engineering have been teaching Erlangs as their bread-and-butter for decades.

Contrast this with IP networks, which aren't like TDM circuits in the slightest. There is variable loss and delay. The bandwidths don't compose. Familiar approaches to SLAs fail in use: even if each party in the chain meets its SLA, the service can still fail, because when you compose the elements there is sporadically too much loss and delay, or too high a rate of variability. This means there is a basic challenge in coming up with a language in which to express SLAs in this new world. We don't yet have a generic "application Erlang" framework to use.
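A small simulation (the delay model is invented for illustration) makes the failure mode concrete: four hops, each comfortably meeting a 95th-percentile delay SLA on its own, compose into a path that badly misses the same percentile target, because the per-hop spike probabilities compound.

```python
import random

random.seed(0)

# Invented per-hop delay model: normally 1 ms, but with probability
# 0.04 a congestion spike pushes the delay to 100 ms. Each hop
# therefore meets a "95th percentile <= 5 ms" SLA on its own.
def hop_delay():
    return 100.0 if random.random() < 0.04 else 1.0

def p95(samples):
    return sorted(samples)[int(0.95 * len(samples)) - 1]

N = 100_000
single_hop = [hop_delay() for _ in range(N)]
four_hops = [sum(hop_delay() for _ in range(4)) for _ in range(N)]

print(p95(single_hop))  # 1.0 ms: each hop honours its SLA
print(p95(four_hops))   # over 100 ms: the composed path does not
```

The chance of hitting at least one spike over four hops is 1 - 0.96^4, roughly 15%, so the spike lands inside the path's 95th percentile even though it sits outside each hop's.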
Service measurement
Knowing what we are trying to achieve is one thing. Knowing whether we and our partners in the chain have achieved it is quite another. Given the complexity of these interconnection systems, which of the myriad metrics on offer should we be measuring? How can we create the technical structures to do so? What measurements do I need to demand of the other parties to whom I am handing traffic and paying money?
None of these issues have complete answers yet in the IPX world.
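That does not stop us sketching the minimum mechanics. As an illustration only (record layout and numbers invented): if two parties exchange per-packet timestamp records at a handover point, the segment's loss rate and delay distribution fall out directly. Note that one-way delay figures are only meaningful if the two capture points have synchronised clocks.

```python
# Invented probe records: {packet_id: timestamp_ms} captured at the
# ingress and egress of a partner's network segment. One-way delay
# requires the two probes to have synchronised clocks.
def segment_stats(ingress, egress):
    delays = sorted(egress[p] - ingress[p] for p in ingress if p in egress)
    loss_rate = 1 - len(delays) / len(ingress)
    p50 = delays[len(delays) // 2]
    p99 = delays[min(len(delays) - 1, int(0.99 * len(delays)))]
    return loss_rate, p50, p99

ingress = {1: 0.0, 2: 10.0, 3: 20.0, 4: 30.0}
egress  = {1: 5.0, 2: 18.0, 3: 29.0}      # packet 4 was lost
print(segment_stats(ingress, egress))      # (0.25, 8.0, 9.0)
```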
Detecting cheats
This is a big one. There is a huge incentive to cheat in this international interconnection game. A lot of money is at stake, and the parties don't share the incentives of domestic carriers, who have to live with each other (and their regulator) for a very long time.

I overheard the following statistics, but cannot vouch for their accuracy: the overall fraud loss in telecoms is around $40bn/year, of which $6bn is attributable to the international network interconnect market. Around 6% of market activity has some connection to fraud. Given that there are 9,500 telcos that switch minutes, locating the fraud is hard, although there are only around 60 carriers that effectively control the whole market and its structure. Assigning responsibility often falls to the large wholesaler. The wholesaler may be clean, but still gets handed the problem, as the one presenting the bill to the unhappy call originator.

This fraud takes many forms: origination fraud (e.g. someone hacking into a PBX and selling onward minutes); termination fraud (e.g. charging for calls that aren't actually completed); missing trader and carousel VAT fraud; money laundering via sudden discount schemes; quality fraud, where you deliver below the quality level promised; and over-sharp business practices, like exploiting accidental pricing errors and reselling those routes like crazy.

So there is a lot of quick and dirty money to be made, and it's easy for criminals and terrorists to hide in this system. That means fraud management as a service is quite a large opportunity, with a complete fraud protection service being the end game. New standards have to be created to share information about fraud between the players, and this potentially calls for new trusted third parties.

Incentives also matter, and not everyone has good ones. There is no IPX market in Africa to speak of, and won't be one for 10 years or more.
The outsourcing of international switches there has complicated responsibility in a way that inhibits market development. Targeting salespeople on revenue, rather than margin, also offers scope for lots of bad behaviour. On top of this, the skill set looks ever more like that of email and web hosting, and the fraud management techniques they use. Is this a game best suited to traditional telcos, or does it open up a market entry opportunity for new players? After all, IPX creates whole new classes of potential fraud and arbitrage, and it is far from clear there is a good telco map of the criminal terrain.
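As an illustration of what "fraud management as a service" might compute (thresholds, record layout and numbers all invented), here is a toy detector for one of the frauds above: termination fraud via false answer supervision tends to show up as routes where an implausibly large share of billed calls are only seconds long.

```python
from collections import defaultdict

def suspicious_routes(cdrs, short_s=6, min_calls=100, max_short_share=0.3):
    """Flag destination routes where too many billed calls are very
    short: a classic signature of false answer supervision (billing
    for calls that never really completed).
    cdrs: iterable of (route, billed_duration_seconds)."""
    stats = defaultdict(lambda: [0, 0])  # route -> [calls, short calls]
    for route, duration in cdrs:
        stats[route][0] += 1
        stats[route][1] += duration < short_s
    return sorted(route for route, (n, short) in stats.items()
                  if n >= min_calls and short / n > max_short_share)

# Invented traffic: RouteX bills 40% of its calls at 2 seconds.
cdrs = ([("RouteX", 2)] * 40 + [("RouteX", 60)] * 60
        + [("RouteY", 2)] * 10 + [("RouteY", 60)] * 90)
print(suspicious_routes(cdrs))  # ['RouteX']
```

A real system would of course baseline per-route behaviour over time rather than use fixed thresholds; this only shows the shape of the computation.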
for quality delivery? As such, IPX feels somewhat trapped between a telephony-centric past and a cloud-centric future.

Is the business model right? As one speaker said: IPX is a technology change, not a business model change; the business model will be changed by the edge, e.g. termination fee changes. But if the business model isn't right for the future, IPX won't succeed. It is far from clear that trying to charge at the granularity of individual calls is the right way forward.

Is it fit-for-purpose? Offering guarantees of service assurance is still a supply-driven view of telecoms, not a demand-driven one. Fitness-for-purpose is missing from the IPX model. This is troubling, as current TDM voice still struggles with this issue, so trying clever new things may not work out as planned. The problem is this: how can I express the technical requirements of my application, and be sure that a particular IPX service will deliver a good quality of experience (QoE)? The industry is again caught between a TDM world (stable timing, fixed delay, no loss) and a true assured cloud delivery model driven by a robust and reliable matching of supply and demand.

Does it do what it says on the tin? It's all very well offering a variety of routing models, but will there be improved transparency (compared to TDM) over what is being provided, and discoverability of what is on offer? How can I know what direct routes you support, versus "I have a mate who knows someone..."?

What is the impact of the lack of end-to-end control? IPX is spliced into chains that include the Internet and non-assured/unmanaged delivery. Issues like WiFi offload are a domestic mobile operator's choice, and the wholesaler doesn't care how the user got access to the domestic network. That limits the benefits of IPX to the weakest link in the chain, over which the wholesaler has no control.

Is there an execution credibility gap?
Philippe Bellordre of the GSMA launched a bit of a rocket into the room when he said the GSMA was proposing service-unaware end-to-end quality differentiation with service assurance. That's a huge ambition, but given the struggles the GSMA has had with RCS, and with getting its own members to implement what they themselves asked for, can we take this seriously?

Is the scope too narrow? The end-to-end QoS model on offer takes a very narrow view of quality. In practice the IPX service providers assume that the market is just for the lowest-latency, non-time-shiftable traffic, which excludes a wide range of possible uses and users.

Is it worth the price asked? For the pleasure of going over IP, there was even a suggestion from suppliers of pricing higher than TDM! Yet there is a demand-side expectation to pay less because something is on IP. Services like HD voice appear to attract the same charge as standard voice, so why bother? How many intermediaries are sharing a voice pie that isn't growing?

Will it cost too much? These IPX networks are full of SBCs doing transcoding, which costs a lot (and needs de-jittering, and thus degrades QoE rather a lot). At the end of the day, services like HD voice are competing with OTT alternatives like FaceTime and Skype, which are all free. IPX service implementation is typically bolted to the cost anchor of IMS, which readily sinks the business case.
thinking that addresses the root causes of why IPX is late to market and slow to be adopted, and solves these basic research issues.

We need a fundamental network science. This links together cost, QoE and network performance (i.e. quality) for packet networks, just as Erlangs did (and continue to do) for TDM ones. We need to be able to model the complete performance hazard space for any application, make appropriate trades of resources between users and uses, and price the resources accordingly. Specifically, we need to understand how IP networks involve performance and cost risks that didn't exist in the past. These networks don't have the properties telcos (and their customers) think they have. That means they need new quality assurance mechanisms that are native to packet-based statistical multiplexing.

The IPX community implicitly knows all of this. However, what they appear to have done is mistakenly equate control with quality. By putting lots of SBCs into the path, there is an appearance of having control, for which you can charge. But these extra network elements kill quality and increase cost: the very thing wholesale operators think they can charge for, they are degrading. Thus there is a disconnect: operators are charging for the mechanisms, not for quality outcomes at a level that is a strong proxy for delivered QoE to the user.
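For contrast, the network science the TDM world already had really is this compact: the standard Erlang-B formula gives the blocking probability of a trunk group directly from offered load and circuit count. A short sketch, using the usual numerically stable recursion:

```python
def erlang_b(offered_load, circuits):
    """Erlang-B blocking probability for `offered_load` (in Erlangs)
    on a trunk group of `circuits` circuits, computed via the stable
    recursion B(0) = 1, B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, circuits + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# 10 Erlangs offered to 15 circuits: ~3.6% of call attempts blocked.
print(round(erlang_b(10, 15), 4))  # 0.0365
```

Three numbers (load, circuits, blocking) tie demand, cost and quality together. No equivalently compact calculus yet exists for loss and delay in statistically multiplexed packet networks, which is precisely the gap being described.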
applications by delivering on those bounded requirements. Only by demonstrably transferring QoE risk away from the end user can telcos differentiate themselves on quality, and hence establish a price floor for their services. To deliver such quality-managed services, telcos will have to both simplify their offer, and enhance the technology used to deliver it. For IPX, that means three things have to change:

SLA composability: these services have to use Quality Transport Agreements (QTAs), based on the quality attenuation (ΔQ) performance refinement calculus we have developed.

Service measurement: this has to be done using a multi-point approach, since that is the only way of directly measuring ΔQ, i.e. the cumulative loss and delay along any path and its statistical distribution.

Detecting cheats: a new scheme needs to be established for sharing key performance data from probes along the paths, so that cheats become visible. Everyone will have to say what they do (QTAs) and then do what they say (i.e. assurance). The good news is that the underlying maths makes this detection both feasible and predictable.

Let's say that again: the only generator of value here is moving information with bounded quality attenuation. The difference between success and failure is grasping the true nature of quality, its invariable impairment along a data path (i.e. ΔQ), and hence how to measure ΔQ and manipulate it.
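A final sketch (probe layout and numbers invented) of why the measurement must be multi-point: with a timestamp at every operator handover, the delay contribution of each segment can be separated out by differencing adjacent probes, which is what makes an SLA-breaking segment visible rather than merely inferable from end-to-end figures.

```python
# Invented traces: per-packet timestamps (ms) at probes placed at each
# handover along the path; None means the packet was lost before
# reaching that probe. Differencing adjacent probe timestamps
# attributes delay (and loss) to individual segments.
def per_segment_delays(traces):
    n_probes = len(traces[0])
    segments = [[] for _ in range(n_probes - 1)]
    for ts in traces:
        for i in range(n_probes - 1):
            if ts[i] is not None and ts[i + 1] is not None:
                segments[i].append(ts[i + 1] - ts[i])
    return segments

traces = [[0, 5, 12, 40],        # big delay added on the last segment
          [1, 6, 13, 15],
          [2, 7, None, None]]    # lost inside the second segment
print(per_segment_delays(traces))  # [[5, 5, 5], [7, 7], [28, 2]]
```

Two-point (end-to-end) measurement would only report that this path is sometimes slow and lossy; the per-segment breakdown shows which party to hold to their QTA.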
If you would like to discuss how to apply these techniques to your business, contact me at mail@martingeddes.com. My partners and I offer training, measurement and manipulation services for ΔQ, which take you towards its mastery. We await your response, however poor and costly the current transport on offer to deliver it.

Martin Geddes
13th October 2013