
Performance of a Speculative Transmission Scheme for Scheduling-Latency Reduction

(Synopsis)

Abstract:
This work was motivated by the need to achieve low latency in an input-queued, centrally scheduled cell switch for high-performance computing applications; specifically, the aim is to reduce the latency incurred between the issuance of a request and the arrival of the corresponding grant. The minimum latency in switches with centralized scheduling comprises two components, namely the control-path latency and the data-path latency, which in a practical high-capacity, distributed switch implementation can be far greater than the cell duration. We introduce a speculative transmission scheme that significantly reduces the average control-path latency by allowing cells to proceed without waiting for a grant under certain conditions. It operates in conjunction with any centralized matching algorithm to achieve a high maximum utilization. Using an analytical model of the scheme, performance measures such as the mean delay and the rate of successful speculative transmissions are derived. The results demonstrate that the request-grant latency can be almost entirely eliminated for loads of up to 50%. Our simulations confirm the analytical results.

Introduction
A key component of massively parallel computing systems is the interconnection network (ICTN). To achieve a good system balance between computation and communication, the ICTN must provide low latency, high bandwidth, low error rates, and scalability to high node counts (thousands), with low latency being the most important requirement. Although optics holds strong promise towards fulfilling these requirements, a number of technical and economic challenges remain. Corning Inc. and IBM are jointly developing a demonstrator system, the Optical Shared Memory Supercomputer Interconnect System (OSMOSIS), to solve the technical issues and map a path towards commercialization; background information on the project and a detailed description of its architecture are given in the related OSMOSIS literature.

System Analysis
Existing System: The Birkhoff-von Neumann switch eliminates the central scheduler, but it incurs a worst-case latency penalty of N time slots: when an input misses its transmission opportunity for a given output, it has to wait up to N time slots for the next one. The control- and data-path latencies comprise serialization and deserialization delays, propagation delay, and processing delay between request and response.

Disadvantage: If a source sends packets to a destination, each packet may have to wait up to N time slots for its transmission opportunity, and serialization and deserialization delays are incurred between request and response.
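
As an illustration of this worst-case wait, assume (purely for this sketch) a Birkhoff-von Neumann switch that cycles round-robin through the N cyclic permutations, connecting input i to output (i + t) mod N in time slot t; the following Java fragment computes how many slots a cell must then wait before its input-output pair is served:

// Illustrative sketch only: assumes a round-robin cycle through the N cyclic permutations.
public final class BvnWaitDemo {

    // Slots a cell from 'input' to 'output' arriving in 'arrivalSlot' waits before service.
    static int waitSlots(int n, int input, int output, int arrivalSlot) {
        int servingOffset = Math.floorMod(output - input, n); // slot phase that serves this pair
        int currentOffset = Math.floorMod(arrivalSlot, n);
        return Math.floorMod(servingOffset - currentOffset, n);
    }

    public static void main(String[] args) {
        int n = 8; // an 8x8 switch
        System.out.println(waitSlots(n, 2, 3, 0)); // 1 slot
        System.out.println(waitSlots(n, 2, 2, 1)); // 7 slots: the pair was just served, so the cell waits almost a full cycle
    }
}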

Proposed System: We propose a novel method to combine speculative and scheduled transmission in a crossbar switch. Speculative modes of operation reduce latency at low utilization, while scheduled modes of operation achieve high maximum throughput.

Advantage:

A speculative transmission does not have to wait for a grant and therefore incurs low latency. Scheduled transmissions achieve high maximum throughput.
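
To make the interplay of the two modes concrete, the following Java sketch shows how an input adapter might issue a request and transmit the cell speculatively in the same step, falling back to a grant-based (scheduled) retransmission only if the speculation collides. This is a sketch under assumptions; the names InputAdapter, Crossbar, and Cell are hypothetical and not taken from the OSMOSIS design.

import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch only: class and method names are hypothetical, not from the paper.
final class InputAdapter {
    private final Queue<Cell> voq = new ArrayDeque<>(); // virtual output queue for one output port
    private final Crossbar crossbar;

    InputAdapter(Crossbar crossbar) {
        this.crossbar = crossbar;
    }

    /** A new cell is requested from the scheduler AND transmitted speculatively right away. */
    void enqueue(Cell cell) {
        voq.add(cell);
        crossbar.sendRequest(cell);           // normal request-grant control path
        crossbar.transmitSpeculatively(cell); // speculative data path: no waiting for the grant
    }

    /** Scheduler feedback for the head-of-line cell at the end of a time slot. */
    void onFeedback(boolean grantReceived, boolean speculationSucceeded) {
        Cell head = voq.peek();
        if (head == null) {
            return;
        }
        if (speculationSucceeded) {
            voq.remove();                     // the speculative copy got through; the grant is no longer needed
        } else if (grantReceived) {
            crossbar.transmitScheduled(head); // speculation collided: retransmit under the grant
            voq.remove();
        }
        // otherwise keep the cell queued until a grant eventually arrives
    }
}

interface Crossbar {
    void sendRequest(Cell cell);
    void transmitSpeculatively(Cell cell);
    void transmitScheduled(Cell cell);
}

final class Cell {
    final int source;
    final int destination;
    final byte[] payload;

    Cell(int source, int destination, byte[] payload) {
        this.source = source;
        this.destination = destination;
        this.payload = payload;
    }
}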

System Requirements: Hardware:


Processor : Pentium IV 2.6 GHz
RAM : 512 MB DDR RAM
Monitor : 15" Color
Hard Disk : 20 GB
CD Drive : LG 52X
Keyboard : Standard 102 Keys
Mouse : 3 Buttons

Software:
Front End : Java, Swing
Tools Used : JFrameBuilder
Operating System : Windows XP
Back End : SQL Server 2000

Modules:
1. Constructing the network
2. Packet creation
3. Forwarding input to the centralized scheduler
4. Applying the centralized matching algorithm
5. Receiving the packets & performance calculation

Module Description:

Module-1: In this module, we construct the network. Each node is connected to its neighboring nodes and is deployed independently in the network area.

Module-2: In this module, the source file is browsed and selected, and the selected data is converted into packets of fixed size.

Module-3: In this module, the fixed-size packets are forwarded from the source to the centralized scheduler. The speculative transmission scheme significantly reduces the average control-path latency by allowing cells to proceed without waiting for a grant, so no request-response exchange with the destination is needed first; in this way we aim to reduce the time delay by up to 50%. The centralized scheduler achieves high maximum utilization.
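
The packet-creation and forwarding steps of Modules 2 and 3 can be sketched in Java as follows; the fixed packet size and the CentralizedScheduler interface are assumptions made for illustration, not part of the original design:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch for Modules 2-3: split the selected file into fixed-size
// packets and hand each one to a (hypothetical) centralized scheduler interface.
final class PacketCreator {
    static final int PACKET_SIZE = 1024; // assumed fixed packet size in bytes

    static List<byte[]> toPackets(byte[] data) {
        List<byte[]> packets = new ArrayList<>();
        for (int offset = 0; offset < data.length; offset += PACKET_SIZE) {
            int end = Math.min(offset + PACKET_SIZE, data.length);
            packets.add(Arrays.copyOfRange(data, offset, end)); // last packet may be shorter
        }
        return packets;
    }

    static void forwardAll(Path sourceFile, CentralizedScheduler scheduler) throws IOException {
        byte[] data = Files.readAllBytes(sourceFile);
        for (byte[] packet : toPackets(data)) {
            scheduler.submit(packet); // forwarded without waiting for a grant (speculative)
        }
    }
}

interface CentralizedScheduler {
    void submit(byte[] packet);
}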

Module-4: Here, we apply the centralized matching algorithm based on the port number: the port number is read from the source, compared against the scheduler's entry, and the packet is then forwarded to the destination.

Module-5: In this module, the valid packets are received from the scheduler and the overall time delay from source to destination is calculated. Our analysis and simulation results both confirm that this scheme achieves a significant latency reduction at traffic loads of up to 50%.
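
As a minimal sketch of the performance calculation in Module-5, assuming each received packet records the time at which it left the source (all names here are illustrative):

import java.util.List;

// Illustrative sketch for Module-5: compute the mean source-to-destination delay
// from per-packet send and receive timestamps, in nanoseconds.
final class PerformanceCalculator {

    static final class ReceivedPacket {
        final long sentAtNanos;
        final long receivedAtNanos;

        ReceivedPacket(long sentAtNanos, long receivedAtNanos) {
            this.sentAtNanos = sentAtNanos;
            this.receivedAtNanos = receivedAtNanos;
        }
    }

    /** Mean end-to-end delay in nanoseconds over all received packets. */
    static double meanDelayNanos(List<ReceivedPacket> packets) {
        if (packets.isEmpty()) {
            return 0.0;
        }
        long total = 0;
        for (ReceivedPacket p : packets) {
            total += p.receivedAtNanos - p.sentAtNanos;
        }
        return (double) total / packets.size();
    }
}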
