
INTRODUCTION

The current Internet is suffering from its own success. The number of users and the variety of applications demanding more and more bandwidth keep increasing day by day, and these ever-increasing demands need ever-increasing bandwidth. This is where optical communication comes into the picture: it provides a huge amount of bandwidth and leads to the popular concept of the optical Internet. The potential of optical fiber was realized fully when wavelength division multiplexing (WDM) was invented. With the number of wavelengths and the per-channel rates typically used in optical networks today, it is theoretically possible to transmit data at rates of up to 1 Tb/s; recent WDM experiments have demonstrated successful transmission over a single optical fiber at an aggregate speed of 1 Tb/s, spread over more than 256 independent wavelengths. As the number of wavelengths per fiber increases, converting data between the optical and electronic domains becomes a critical bottleneck in terms of cost, size, processing speed and power consumption. In order to realize the potential fiber bandwidth and WDM gains fully, the number of such conversions must be minimized. Optical Burst Switching (OBS) has recently been proposed as a future high-speed switching technology that may be able to efficiently utilize extremely high capacity links without the need for data buffering or optical-electronic conversions at intermediate nodes. However, contention between bursts may cause loss within the network, and proposals to date for OBS have yielded very high loss rates even for moderate network loads.

CURRENT HIGH SPEED NETWORKS

SONET

Synchronous Optical Network (SONET) and the closely related Synchronous Digital Hierarchy (SDH) standards are the predominant technologies in today's carrier networks. All-optical networks are transparent and are therefore data-format independent. While the data carried inside optical streams in an all-optical network may indeed be SONET formatted, the associated SONET protocols are restricted to nodes at the edge of an all-optical network and therefore do not affect its operation. SONET is an example of a network protocol that carries critical control information inside the framing format, control information that needs to be read at each node in a SONET network. SONET employs sophisticated multiplexing techniques to interleave synchronous streams of electronic data from the basic signal rate of approximately 51.84 Mb/s (STS-1) up to a maximum theoretical rate of approximately 40 Gb/s (STS-768). All other SONET rates are integral multiples of the basic rate, so that an STS-N signal has a bit rate equal to N times 51.84 Mb/s. SONET is a synchronous system with frames sent every 125 µs. To achieve higher speeds, individual STS-1 frames are aggregated together using byte-interleaving or through the use of larger frame sizes, usually referred to as concatenated frames. The two main node types in a SONET network are Add/Drop Multiplexers (ADMs) and Digital Cross-Connects (DCCs). ADMs are designed to pick out one or more low-speed streams from a high-speed stream and, similarly, to insert one or more low-speed streams into a high-speed stream. A DCC is a more advanced node that, in addition to ADM functionality, can groom traffic. Grooming allows composite low-speed streams to be individually switched, resulting in fine-grained control at the expense of increased complexity and port count.
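As a quick illustration of the rate hierarchy just described, the short Python sketch below computes a few common STS-N rates from the basic 51.84 Mb/s signal (the function name and printed values are only illustrative):

    STS1_MBPS = 51.84  # basic SONET signal rate (STS-1)

    def sts_rate_gbps(n):
        """Bit rate of an STS-N signal in Gb/s: N times the STS-1 rate."""
        return n * STS1_MBPS / 1000.0

    for n in (1, 3, 48, 192, 768):
        print("STS-%d: %.2f Gb/s" % (n, sts_rate_gbps(n)))
    # STS-48 -> 2.49 Gb/s, STS-192 -> 9.95 Gb/s, STS-768 -> 39.81 Gb/s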

SHORTCOMINGS OF SONET

The success of SONET has been largely due to the comprehensive functionality of the additional control information carried along with the frame. This overhead includes functions to manage performance, faults, configuration and security, but it has a significant drawback: to control the network, this overhead, and therefore each frame, needs to be read at each node. This means that each frame must be received in the optical domain, converted to electrical form and then retransmitted in the optical domain. This process is called Optical-Electronic-Optical (OEO) conversion. In addition to these conversions at every node, several electrical regenerators may need to be placed between each pair of nodes to restore the output signal level, reshape the pulses and retime the signal. As a consequence, high-speed SONET, such as STS-768, is prohibitively expensive due to the large number of high-speed OEO converters required. An important side-effect of OEO conversion is that the process is code, protocol and timing sensitive. The combination of these characteristics makes provisioning and upgrading extremely complicated and lengthy processes, often taking several weeks or even several months. The coarse granularity of SONET may also cause significant inefficiency. For example, a customer can only upgrade from an STS-48 (2.5 Gb/s) to an STS-192 (10 Gb/s) if more capacity is required. Another significant inefficiency is due to the Time Division Multiplexing (TDM) nature of SONET. Even if only a fraction of the capacity is being used to transmit useful data, the excess capacity is not available to other users. Each connection is logically circuit switched, and therefore aggregating many connections gives no multiplexing gain. As a consequence, SONET networks must be dimensioned with respect to the peak load of each of their composite streams. Furthermore, SONET is a single-wavelength technology. Given that more than 256 wavelengths can be used simultaneously on a single fiber, this limitation has forced the rapid development of alternative network infrastructures and protocols. As the above disadvantages show, SONET is unsuitable for future high-speed networks. Instead, what is required is a set of protocols and associated network infrastructure that overcome the problems with SONET yet do not introduce significant new problems themselves. More precisely, the ultimate research goal is the development of a new scheme that does not require extensive OEO conversions, can be rapidly provisioned and upgraded, is independent of payload data formats, uses bandwidth efficiently and, most importantly, can scale to large numbers of wavelengths per fiber. Such a scheme will be most useful where there are high levels of aggregation of users and is therefore of greatest importance in the network core. Current research focuses on three main technologies that solve most or all of the above problems: Optical Circuit Switching (OCS), Optical Burst Switching (OBS) and Optical Packet Switching (OPS).

OPTICAL SWITCHING
The appearance of enhanced multimedia services requiring huge bandwidths, such as broadcasting of high-definition television (HDTV), video on demand and online gaming, has created a need to move to higher switching speeds. As network traffic volume rises, particularly from the demand for high-bandwidth multimedia services, the number of wavelengths per fibre will increase. This means that changes are needed to the earlier switching methods, in which optical signals are converted to electronic signals, processed, regenerated, switched electronically and then converted back to optical form. The main reason is that the performance of the electronic equipment used in this OEO conversion process is strongly dependent on the data rate and protocol, and for long-distance fibers the cost of regeneration also rises. The development of the Erbium-Doped Fiber Amplifier (EDFA) in the late 1980s drastically reduced the need for electronic regeneration. This device is capable of amplifying many wavelengths simultaneously, yet is insensitive to bit rates, modulation formats and power levels.

ALL-OPTICAL SWITCHES

Now that regenerators could be removed from optical links, switching nodes became the electronic bottleneck. If no conversion of a data stream to electronic form occurs within a switching element, this element is called an all-optical switch. Furthermore, due to optical technology constraints, data within optical signals cannot be read in the optical domain. All-optical switching from input wavelength to output wavelength is therefore called transparent switching, in contrast to opaque switching, where conversion to the electronic domain is required for the switching process. Assuming control information can be separated from the main data signal and received electronically at each node, the required functionality of an all-optical switch is simply to transparently switch a given input wavelength on a given input fiber to a desired output wavelength on a desired output link. Several technologies that achieve this goal have recently been developed, including micro-electro-mechanical switches (MEMS). This technology is already employed in commercially available switches such as Lucent's Lambda Router, which was recently sold to Japan Telecom to connect major metropolitan areas across Japan. A MEMS switch consists of an array of tiny mirrors that move when an electrical current is applied. By adjusting the tilting angle of one or more mirrors, optical signals can be switched from input to output fibers. 3D MEMS is an extension of this technique in which mirrors are positioned in a three-dimensional matrix and rotate on two axes, enabling mappings between a much greater number of input and output ports. Calient Networks' DiamondWave PXC photonic switch is an example of a currently available switch utilizing 3D MEMS technology to achieve 256x256 switching capacity. Researchers at Lucent believe that multi-thousand-port fabrics are physically realizable, with the potential of switching capacity 2000 times greater than that of currently struggling electronic fabrics. Furthermore, the average loss experienced by a MEMS switch is extremely low. There are also several other all-optical switching technologies, based for example on fluids (bubble switches), Semiconductor Optical Amplifiers (SOAs) and electro-optic lithium niobate (LiNbO3). The latter two are capable of switching times in the nanosecond range; however, SOAs add significant amounts of noise to optical signals, while LiNbO3 switches cause approximately 8 dB of loss, limiting their scalability. In addition, both of these fast technologies are polarization sensitive.

OPTICAL CIRCUIT SWITCHING


To send information quickly and reliably across a network, service providers use various techniques to establish a circuit-switched lightpath, i.e. a temporary point-to-point optical connection between the two communicating ends. An OXC (optical cross-connect) is a key element for setting up express paths through intermediate nodes in this process. Since an OXC is a large, complex switch, it is used in networks where there is a heavy volume of traffic between nodes. In such networks, the lightpath is normally set up for long periods of time. Depending on the service running between the distant nodes, this connection time can range from minutes to months and even longer.

Lightpaths running from a source node to a destination node may traverse many fiber link segments along the route. At intermediate points along the connection route, the lightpaths may be switched between different links, and sometimes the lightpath wavelength may need to change when entering another link segment. This wavelength conversion is necessary if two lightpaths entering a segment happen to use the same wavelength. This process of establishing lightpaths is called wavelength routing or lightpath switching.

However, as the number of wavelengths per fiber and the associated number of lightpaths to be managed grow, the ability of circuit switching to scale is questionable. Given that once the circuit, or lightpath, has been established it is very difficult to change either the routing or the wavelengths used along the path without significant disruption, it is very important to choose the initial values carefully. In today's large networks, this system optimization is largely done by human traffic engineers because network operators fear that automated solutions may be unstable in practice, yielding both sub-optimal performance and poor reliability, a fear grounded in unsuccessful experiments with adaptive routing in the ARPAnet. Guaranteeing stability for complex ASON-style networks may prove to be particularly difficult. Furthermore, circuit switching is burdened by a fundamental problem: circuits, by definition, need to be provisioned for peak traffic intensity levels if loss is to be bounded over short to medium time scales. Therefore, in non-peak periods much of this allocated capacity may be unused, yet unavailable to other circuits in the network. To achieve useful levels of statistical multiplexing through capacity sharing, some type of packet switching must be used.

OPTICAL PACKET SWITCHING


Circuit switching is inherently inefficient given time-varying traffic intensity, as the capacity reserved by the circuit is not shared. This loss in statistical multiplexing capacity was the main motivation behind the development of packet-based networks in the electrical domain and may cause a similar paradigm shift within the optical domain. The success of electronic packet-switched networks lies in their ability to achieve reliable high packet throughputs and to adapt easily to traffic congestion and to transmission link or node failures. Much research has been carried out on extending this ability to all-optical networks in which no OEO conversion takes place along a lightpath. In an OPS network, user traffic is routed and transmitted through the network in the form of optical packets, along with in-band control information contained in a specially formatted header or label. In OPS the header processing is carried out electronically, while the switching of the optical payload is done in the optical domain for each packet. This decoupling between header or label processing and payload switching allows the packet to be routed independently of payload bit rate, coding format and packet length.

OLS (optical label swapping) is a technique for realizing a practical OPS implementation. In this procedure, optically formatted packets, which contain a standard IP header and an information payload, first have an optical label attached to them before they enter the OPS network. When the payload-plus-label packet travels through an OPS network, the optical packet switches at intermediate nodes process only the optical label electronically. This is done to extract routing information for the packet and to determine other factors, such as the wavelength on which the packet is being transmitted and the bit rate of the encapsulated payload. The payload remains in optical format as it moves through the network.

Ultimately, cost is the determining factor in the choice of network protocols. Adding computing to the network in the form of packet switching functionality was seen to be economically advantageous. The key difference between packet switching and circuit switching is that in the former, the routing of data is determined by the label or header of a discrete group of bits, while the latter simply maps an input port to an output port. As packets are routed individually, many packets from different sources, going to different destinations, can share a common wavelength, leading to potentially high levels of statistical multiplexing and associated efficiency gains. There are three main limitations in optical packet switching that are not present in the electronic equivalent: the lack of Random Access Memory (RAM) for buffering, the lack of sophisticated optical processing, and relatively slow switching speeds.

OPTICAL BUFFERING

It is currently impossible both to store an optical signal indefinitely and to randomly access stored optical signals. In electrical packet switches, to avoid contention between packets arriving at similar times and destined for the same output link, packets can be queued for later transmission until the corresponding output link becomes free. In optical packet switches, such queuing of packets is not currently possible. Although there have been some promising discoveries, such as the chiropticene switch, optical RAM is still in the early stages of development and may never be achievable. A limited form of buffering is achievable in the optical domain: optical signals can be delayed by a fixed time period by sending them down an optical fiber that loops back to the input port. Such loops are called Fiber Delay Lines (FDLs). The delay is simply the length of the loop divided by the propagation speed of light in the fibre; for example, 3 km of fiber gives a delay of roughly 15 µs (light propagates at about 2 × 10^8 m/s in silica fibre), approximately the time taken to transmit a dozen 1.5 kB packets on a 10 Gb/s link. However, 3 km is quite a lot of fiber to install on every output port, and to achieve variable delays many FDLs of different lengths must be used, adding to the complexity. Maintaining temperature stability is also difficult across long sections of fibre.
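A back-of-the-envelope check of the FDL figures above, assuming a propagation speed of roughly 2 × 10^8 m/s in silica fibre (the constants and names below are illustrative only):

    V_FIBRE = 2.0e8          # approx. speed of light in silica fibre, m/s
    LINE_RATE = 10e9         # 10 Gb/s link
    PACKET_BITS = 1500 * 8   # one 1.5 kB packet

    def fdl_delay_s(loop_length_m):
        """Delay of a fibre delay line: loop length divided by propagation speed."""
        return loop_length_m / V_FIBRE

    delay = fdl_delay_s(3000)                       # 3 km loop -> about 15 microseconds
    packets_held = delay * LINE_RATE / PACKET_BITS  # roughly 12 x 1.5 kB packets at 10 Gb/s
    print(delay * 1e6, packets_held)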

OPTICAL BURST SWITCHING


Optical burst switching (OBS) was first proposed in the late 1990s as a new means of providing telecommunications transport services. It is an optical networking technique that allows dynamic sub-wavelength switching of data, and is viewed as a compromise between the as-yet unfeasible full optical packet switching (OPS) and the mostly static optical circuit switching (OCS).

To support bursty Internet traffic efficiently, optical burst switching (OBS) has been proposed as a way to streamline both protocol and hardware in building the future-generation optical Internet. By leveraging the attractive properties of optical communications and, at the same time, taking into account its limitations, OBS combines the best of optical circuit switching and packet switching. The central concept of OBS is that rather than switching individual packets, the source should group packets into a burst and switch the burst as a unit. This is the main advantage of OBS: it provides short-timescale statistical multiplexing that benefits both network operators and users, whilst providing significantly higher efficiency than OPS for current optical device technologies.

FUNDAMENTAL OBS CONCEPTS AND ARCHITECTURE

OBS network architecture

Telecommunications networks are often organised in a three-stage hierarchy: users connect through an access network; their traffic is then aggregated and groomed onto a higher-capacity intra-city metro network; and traffic bound for another city or country is further aggregated onto the highest-capacity backbone or core network. OBS is considered a candidate technology for backbone and metro network implementation. We consider an OBS network as providing data transport services to the next lower level of the hierarchy, whichever that happens to be. The only restriction is that the client network is assumed to be a packet switching network, and submits packets to the OBS network for transmission. Figure 1 shows client networks gaining access to the OBS network's transport services via edge routers.

Figure 1: An OBS network, showing key components: burst assembling edge routers and core cross-connects.

The edge routers have the job of grooming and routing the client network traffic into the OBS network. The OBS network itself consists of optical burst switches connected by WDM fiber links. When a host in one client network, say A, wishes to send data to another host, say B, in a different client network, the client network routes A's packets to the local edge router, X, based on their eventual destination address (i.e., B). The edge router X then uses the packet's destination to determine how to route the packet through the OBS network. It will use the OBS network to transmit the packet to edge router Y in the client network to which B is attached. When the packet reaches Y, Y will route the packet on towards B, the destination. Nevertheless, it is possible to identify several key characteristics that distinguish OBS from traditional switching techniques. In an OBS network the gateways at the edge of the network are replaced with burst assembling edge routers, and the core switching elements are replaced with optical burst switches. The difference in the dynamic operation of OBS networks compared to OCS and OPS is firstly that the edge routers assemble packets bound for the same destination edge router (Y) into bursts, and secondly that the OBS switches treat bursts as single entities for switching purposes, amortising switch setup overhead over many packets. A third difference is that each assembled burst is sent into the network according to a reservation protocol. The reservation protocol is similar to the circuit setup protocol of circuit switching. The header information (which in packet switching would be transmitted in-band and immediately ahead of the payload) is transmitted out-of-band on a separate control channel, and precedes the burst payload by an offset time. The OBS nodes then make resource reservations for the burst in advance, so that when the burst arrives, the node's OXC is already preconfigured to switch it onto the correct output fibre and wavelength. The OXC connection is maintained only as long as the burst's holding time. This is a key difference to both packet switching and circuit switching. There are numerous different reservation protocols.

Edge routers

The term burst switching refers to the key concept of OBS, which is that the edge router, instead of forwarding the packets one at a time through the OBS network, assembles many packets headed for the same destination edge router into a much larger super-packet, known as a burst. The reason for doing this is to gain higher efficiency with slower switching technologies. An edge router that assembles packets into bursts in this manner is known as a burst assembling edge router or, more simply, as a burst assembler (BA). Each BA will have one queue for each possible destination edge router. If the OBS core network supports service differentiation based on class of service (CoS) labels, then rather than one queue per destination there may be K queues per destination, given that there are K supported service classes. The assembly process is depicted in the dashed ellipse in Figure 1, and the resulting queue structure in Figure 2.


Figure 2: Architecture of a burst assembler. A queue exists for each class for each destination.

Incoming packets are routed to a particular queue based on their destination and CoS. When a queue satisfies a certain trigger condition, all packets in that queue are grouped into a burst and scheduled for transmission into the OBS core network. The BA then sends a message that notifies each OBS node along the intended path of the imminent arrival of the burst and requests transmission resources. This message is known as a control packet. The control packet specifies the length of the burst in seconds (its holding time) and the offset time, as well as any other information about the burst that the OBS core nodes require (such as CoS). The offset time is the difference in time between the arrival of the control packet at an OBS node and the arrival of the first bit of the burst. This is illustrated in Figure 3.

Figure 3: Burst data preceded in time by the control packet.


Control packets are request messages, similar to the setup, tear-down and acknowledgement messages of circuit switching networks. Supposing that the bit rate of the source transmitter is R bits per second and that the burst contains x bits of data, then the holding time is h = x/R seconds. Thus the holding time h is determined by the amount of data in the burst, which depends on the trigger condition used to decide when the queue contains enough data to send as a burst. This condition is the concern of burst assembly algorithms, which fall into three main groups: timer-based algorithms, threshold-based algorithms, and hybrids of the two. To give a concrete example of burst holding time, let us assume that a burst contains 100 packets and that the average packet size is 500 bytes. The size of the burst is then 100 × 500 × 8 = 400,000 bits. If the line rate is 10 Gb/s, then the holding time is h = 400,000 / (10 × 10^9) = 40 µs. In reality this would be augmented slightly by receiver synchronization, framing overhead and guard times.
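The following Python sketch illustrates a hybrid timer/threshold burst assembler of the kind described above. It is a simplified illustration rather than a reference implementation; the queue structure, trigger values and control-packet fields are assumed for the example.

    import time

    SIZE_THRESHOLD_BITS = 400_000   # e.g. ~100 packets of 500 bytes (assumed threshold)
    TIMER_LIMIT_S = 0.001           # maximum assembly delay (assumed value)
    LINE_RATE_BPS = 10e9            # transmitter bit rate R

    class BurstQueue:
        """One queue per (destination edge router, class of service)."""
        def __init__(self):
            self.bits = 0
            self.packets = []
            self.opened_at = None

        def add(self, packet_bits, packet):
            if not self.packets:
                self.opened_at = time.time()
            self.packets.append(packet)
            self.bits += packet_bits

        def triggered(self):
            """Hybrid trigger: fire on size threshold or timer expiry, whichever comes first."""
            if not self.packets:
                return False
            aged_out = time.time() - self.opened_at >= TIMER_LIMIT_S
            return self.bits >= SIZE_THRESHOLD_BITS or aged_out

        def make_burst(self, offset_s):
            """Drain the queue into a burst and build the control-packet fields for it."""
            holding_time = self.bits / LINE_RATE_BPS      # h = x / R seconds
            control_packet = {"offset": offset_s, "holding_time": holding_time}
            burst, self.packets, self.bits = self.packets, [], 0
            return control_packet, burst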

OBS cross-connect architecture

Once the control packet has been sent by the burst assembling edge router, each optical burst switch along the path in turn decides whether the burst should be forwarded or dropped. This is controlled partly by the reservation protocol used by the nodes. We now consider the architecture of the individual nodes, which is illustrated in Figure 4.


Figure 4: Architecture of an optical burst switch.

The links of an OBS network are optical fibres bearing WDM optical data signals. The OBS node in Figure 4 consists of N input fibers and N output fibers, each carrying k + 1 WDM channels, {W1, ..., Wk, Wc}. The first k wavelengths on each fibre are de-multiplexed by N WDM demultiplexers, and the resulting N × k distinct optical signals are switched to output ports by the OXC. The cross-connected signals are then re-multiplexed onto the N output fibers. Meanwhile, wavelength Wc is tapped off to the Electronic Control Unit (ECU) and demodulated into electrical form (i.e. O/E conversion). This wavelength is called the control wavelength or control channel, and it is the transmission channel for the control packets. The control channel line rate may be significantly lower than the data channels' line rates, since control packets are designed to have negligible length compared to the burst and have a one-to-one correspondence to bursts. Once the information in the control packet has been read, the first step taken by the ECU is to make a forwarding decision, i.e. which output fiber to switch the burst to. It then determines if the burst can be transmitted on the chosen output fiber by comparing the requested transmission interval with its current list of reserved intervals on the wavelengths of that fiber and executing a channel allocation algorithm. If there is a free interval that fits the new request, the ECU records the new reservation on the chosen data wavelength and retransmits the control packet on the control wavelength of the chosen output fiber. The control wavelength is multiplexed back together with the data-bearing wavelengths. If no suitable free interval is found, the control packet is discarded and the data burst will be dropped when it arrives at the switch. For successful reservations, the ECU then uses a signalling interface to the OXC (shown in Figure 4) to set up a connection for each burst between its input and output fibre and wavelength. The connection is short-lived, its lifetime depending on the reservation protocol used by the ECU and the burst holding time.
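A minimal sketch of the ECU's free-interval check and reservation step described above, assuming reservations are kept as per-wavelength lists of (start, end) times; the names and data structures are illustrative, not taken from any particular implementation.

    def interval_free(reservations, start, end):
        """True if [start, end) does not overlap any existing reservation on a wavelength."""
        return all(end <= s or start >= e for (s, e) in reservations)

    def allocate_channel(fiber_reservations, arrival, offset, holding):
        """Try to reserve some wavelength on the chosen output fiber for the burst.

        fiber_reservations: dict mapping wavelength -> list of (start, end) reservations.
        Returns the chosen wavelength, or None if the burst must be dropped.
        """
        start = arrival + offset            # first bit of the burst reaches the switch
        end = start + holding               # last bit has left the switch
        for wavelength, reservations in fiber_reservations.items():
            if interval_free(reservations, start, end):
                reservations.append((start, end))   # record the new reservation
                return wavelength
        return None                          # contention: no free interval on any wavelength

Note that only the interval during which the burst actually occupies the switch is reserved here, which anticipates the delayed-reservation idea of the JET protocol described later.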

Burst Assembly

Semiconductor Optical Amplifier (SOA) and electro-optic lithium niobate (LiNbO3) all-optical switches are capable of switching times in the nanosecond range but have serious problems. Assuming these problems will not be quickly overcome, the time required to reconfigure an all-optical switch matrix is a significant fraction of, or even more than, the time taken to transmit an IP packet. Therefore, to achieve useful levels of efficiency, packets must be aggregated at the edge of an OBS network. The node where packets are aggregated is called an ingress node. After being switched through the OBS network, successfully received bursts are disaggregated into packets. The final node is called an egress node. A sample path in an OBS network is shown in Figure 1.

PATH RESERVATION

In order to achieve statistical multiplexing gains, the entire capacity of a network must remain unsegmented such that there is a single pool of unused bandwidth that is universally available. In the case of circuit switching, any unused bandwidth in a circuit is inaccessible to other circuits, and therefore bursty traffic distributions result in very low utilization of the network. Early burst switching technologies, called Tell-and-Wait (TaW) and Tell-and-Go (TaG), were developed in the early 1990s to reduce this inefficiency. Both Tell-and-Wait and Tell-and-Go attempt to reserve a short-term circuit to deliver a burst of cells such that network capacity can be shared and multiplexing gains achieved. TaW sends a short request message that attempts to reserve bandwidth at each switch in the path. If the reservation is successful, an acknowledgement (ACK) is sent from the final node to the origin of the request message and the burst is sent immediately on receipt of this ACK. If a reservation cannot be made at any of the nodes in the path, a Negative Acknowledgment (NACK) is returned to the origin of the request message along the reverse path and previously made reservations are freed. TaG, on the other hand, does not reserve any bandwidth in advance and sends the burst whenever it is ready. Upon arrival of the header at an intermediate node in the path, capacity on the corresponding output link is reserved, provided that sufficient capacity is available. If sufficient capacity is not available, the burst is discarded and only the header forwarded to the final node, which then returns a NACK. The performance of these two protocols depends on the propagation delay of the path and the size of the burst. For large propagation delays with respect to the burst size, TaG outperforms TaW, and vice versa.
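As a rough illustration of the trade-off just mentioned (the model and the numbers are assumed, not taken from the text): under TaW the source waits a full round trip before transmitting, so for short bursts on long paths almost all of the time is spent waiting, which is why TaG wins in that regime.

    def taw_overhead(one_way_prop_s, burst_s):
        """Fraction of per-burst time spent waiting for the ACK under TaW (crude model)."""
        round_trip = 2 * one_way_prop_s
        return round_trip / (round_trip + burst_s)

    print(taw_overhead(0.005, 40e-6))  # 1000 km path, 40 us burst: ~0.996, so TaG is preferable
    print(taw_overhead(0.005, 1.0))    # same path, 1 s burst: ~0.01, TaW's wait is negligible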

RESERVATION PROTOCOLS

As mentioned, the OBS node's ECU has two tasks: channel allocation and reservation protocol processing. Reservation protocols are frequently closely related to the channel allocation algorithm. Together, the two determine whether, and in what manner, transmission resources are allocated to a particular burst at each link in its path. A burst may need to traverse many fiber links in order to reach its destination, and at each link there must be sufficient capacity to accommodate it. In OBS there are generally two types of reservation protocols:
1. One-way reservation protocols
2. Two-way reservation protocols


One-way reservation protocols

The best-known basic reservation protocol is the Just Enough Time (JET) protocol, proposed by Yoo and Qiao. Rather than using a two-way reservation protocol, JET is a one-way reservation protocol. Since burst holding times are much smaller than typical circuit holding times, the delay between sending the setup message (or control packet) and receiving an acknowledgement would be an appreciable fraction of the holding time and could represent an undesirably long delay to the packets in the burst. It could also result in low utilisation of the edge router's access link to the OBS network. Thus, JET instead uses an unacknowledged, one-way reservation algorithm. No acknowledgement is required: the source sends the control packet and then simply waits for a set offset time. Once the offset time has elapsed, it sends the data burst itself. The ECU knows when the burst is coming because the control packet tells it; conveying this timing information is one of the most important functions of the control packet. This allows the ECU to implement the second important feature of JET, known as delayed reservation. In JET, the channel is reserved only for the period of time during which the burst will actually be traversing the cross-connect. The cross-connect is free to assign the channel to other bursts from different sources during the period between the control packet and burst arrival times, leading to higher channel utilization. One alternative, which is also a one-way protocol, is known as Just In Time (JIT). The JIT protocol is similar to JET, but uses an acknowledgement from only the first cross-connect. Furthermore, the control packet does not carry timing information: the channel is reserved from the moment the control packet is received and processed, hence the offset time, which is determined by the first cross-connect, must be incorporated into the burst's channel holding time. In this case, it is important to have as small an offset time as possible. As in JET, once the source is informed of the correct offset time to use by the first cross-connect, it simply sends its burst at that offset, without waiting for acknowledgement of resources, so the protocol is still a one-way protocol.
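The difference between JET's delayed reservation and JIT's immediate reservation can be made concrete with a small sketch (the timing model and function names are assumed for illustration):

    def jet_reservation(control_arrival, offset, holding):
        """JET: reserve only while the burst actually occupies the cross-connect."""
        start = control_arrival + offset
        return (start, start + holding)

    def jit_reservation(control_arrival, offset, holding):
        """JIT: reserve from the moment the control packet is processed, so the
        offset time is effectively added to the channel holding time."""
        return (control_arrival, control_arrival + offset + holding)

    # Control packet arrives at t = 0 s, 50 us offset, 40 us burst:
    print(jet_reservation(0.0, 50e-6, 40e-6))   # (5e-05, 9e-05)  -> channel busy for 40 us
    print(jit_reservation(0.0, 50e-6, 40e-6))   # (0.0, 9e-05)    -> channel busy for 90 us

This is why JIT implementations try to keep the offset time as small as possible.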


Two-way reservation protocols

In a two-way, acknowledged reservation protocol, the source burst assembler can easily retain the burst in memory and continue requesting transmission until the request succeeds. Thus delay has two interesting components in a two-way protocol: the burst assembly delay and the resource reservation delay. Several two-way, or acknowledged, OBS reservation protocols have been proposed; the most prominent is known as wavelength-routed optical burst switching (WR-OBS). WR-OBS differs from OBS/JET in two significant ways: first, it presumes much longer bursts; and second, it uses dynamic, acknowledged lightpath establishment to provide a dedicated channel for the transmission of each burst. Bursts are longer in WR-OBS because burst aggregation is assumed to take a time T that is of the same order as the time required to request a lightpath. Given realistic network propagation delays, this is likely to be on the order of milliseconds; OBS/JET generally assumes burst lengths and burst assembly delays on the order of microseconds. A WR-OBS edge router collects packets for a burst until some condition is met that triggers the source to send a request for a lightpath to a central network controller. Aggregation then continues until an acknowledgement that the lightpath was successfully established makes its way back to the source, at which point transmission of the burst begins. The condition on which the lightpath request is sent may be either that the amount of packet data collected exceeds some threshold, or that some limit on allowable delay has been reached.

CONTENTION RESOLUTION

Once bursts are assembled, they are launched into the network according to a reservation protocol. The reservation protocol can fail if there are not enough resources for burst transmission. The resources of an OBS network are the wavelengths supported on each link. When a control packet arrives at a cross-connect, say at time t, the control unit decodes it to extract the offset time To and duration (holding time) h of the burst, its destination, and other related information. The ECU uses this information to make a routing decision: given the burst's destination, it decides which output fiber should be used to forward the burst. It then considers whether there is a wavelength that is free from time (t + To) until (t + To + h) on the chosen output fiber. If so, the burst can be accommodated. If not, there is said to be contention. An OBS cross-connect does not have the luxury enjoyed by electronic packet switches of delaying or queueing the burst indefinitely in the case of contention, because no optical technology can yet store data for an indefinitely long period of time. There are four main methods for resolving contention (a simple decision cascade over these options is sketched after the list):

1. Wavelength Conversion: on contention, we try to make a reservation on a different output wavelength on the desired output link.
2. Fiber Delay Line (FDL): on contention, we try to make a reservation on the desired output wavelength on the desired output link, but at a different time.
3. Deflection Routing: on contention, we try to make a reservation on the desired output wavelength on a different output link.
4. Preemption: on contention, we remove the contending reservation, then make a reservation on the desired output wavelength on the desired output link.
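A minimal sketch of that decision cascade follows. The node object and its helpers (find_free_wavelength, is_free, alternative_links, preempt, fdl_delays) are hypothetical; a real switch would implement them against its reservation tables.

    def resolve_contention(node, burst, out_link, wavelength, interval):
        """Try the four contention-resolution options in a fixed, illustrative order."""
        # 1. Wavelength conversion: any other free wavelength on the same output link?
        other = node.find_free_wavelength(out_link, interval)
        if other is not None:
            return ("convert", out_link, other, interval)
        # 2. Fibre delay line: same link and wavelength, shifted later by an FDL delay.
        for delay in node.fdl_delays:
            shifted = (interval[0] + delay, interval[1] + delay)
            if node.is_free(out_link, wavelength, shifted):
                return ("delay", out_link, wavelength, shifted)
        # 3. Deflection routing: same wavelength on an alternative output link.
        for alt_link in node.alternative_links(out_link):
            if node.is_free(alt_link, wavelength, interval):
                return ("deflect", alt_link, wavelength, interval)
        # 4. Preemption: evict a contending reservation (policy-dependent), else drop.
        if node.preempt(out_link, wavelength, interval, burst):
            return ("preempt", out_link, wavelength, interval)
        return ("drop", None, None, None)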

ADVANTAGES

Table 1: Advantages of OBS.

DISADVANTAGES

1. Faces two technological bottlenecks: processing speed and buffering.
2. Noise accumulation.
3. Bursts are dropped in case of contention.

CONCLUSION

The fundamental concepts of OBS are burst assembly and edge routers; reservation protocols, control packets and offset times; OBS node architectures; and contention resolution. Although OBS was invented less than ten years ago, there is already a relatively large body of published OBS research. Nevertheless, numerous open issues and challenges still face researchers and engineers. The primary challenge is to move OBS from research into practical realisation, and then on into commercial deployment. Current research is overwhelmingly theoretical or simulation-based. Significant investments, possibly funded in part by commercial interests, will be needed to develop components and testbeds to prove the viability of the ideas behind OBS. The realities of optical device physics pose significant challenges to realising the types of switches needed by OBS (fast switching speed, low loss and distortion, scalability), and other technical problems (such as offset time control and receiver synchronisation) can be envisaged today. Still further problems are likely to arise as implementations proceed, since much remains to be done in the optical domain.


REFERENCES
1. Optical Fiber Communications - Gerd Keiser.
2. Telecommunication Switching, Traffic and Networks - J. E. Flood.
3. Modeling and Dimensioning of Optical Burst Switching Networks - Jolyon Ambrose Scoresby White.
4. Optical Burst Switching: Towards Feasibility - Craig Warrington Cameron.
5. Optical Burst Switching: A New Paradigm for an Optical Internet - Chunming Qiao and Myungsik Yoo.
6. http://en.wikipedia.org/wiki/Optical_burst_switching

