Sie sind auf Seite 1von 23

Plagiarism Detector v.

986 - Originality Report:


Analyzed document: 09/03/2017 7:53:38 PM

"updated-NEW THESIS NEETA (1)-1 n-1.docx"


Licensed to: Originality report generated by unregistered Demo version!

Warning: Demo Version - reports are


incomplete!

To get full version, please order the


software:

Relation chart:

Distribution graph:

Comparison Preset: Word-to-Word. Detected language: English


Top sources of plagiarism:
%4 wrds: 796 Demo mode! Please register!
% 0.8 wrds: 231 Demo mode! Please register!
% 0.4 wrds: 108 Demo mode! Please register!

[Show other Sources:]

Processed resources details:


96 - Ok / 16 - Failed
[Show other Sources:]

Important notes:
Wikipedia: Google Books: Ghostwriting services: Anti-cheating:
Wiki Detected! [not detected] [not detected] [not detected]

Excluded Urls:

Included Urls:

Detailed document analysis:


Section ID: 1
Plagiarism detected: 0.09% ofdocument
this
is Type: regular
located in: Size: 22 words
[click the link to activate Side-by-Side compare]
Demo mode! Please register!
[click to show more resources]
ABSTRACT
WiMax (IEEE 802.16) belongs to a family of wireless communication for broadband wireless access (BWA)
network with advancement in technology.

Multicast and broadcast service of transportable Wireless is the fastest growing technology in the past few years,
because it provides the multimedia content to big range users in a cost efficient method. The high speed Wireless
networks have completed it possible to provide real time multimedia streaming. The requirement for portable
Multimedia streams has been increasing in the earlier period few years. Multimedia streams can be transported to
mobile mechanisms over a variety of WiMax networks, include 3G, Wi-Fi etc. The major purpose is to execute
video streaming over WiMax. Video buffering is the major problem occurs during online playing so an improved
method is used to enhance the QOS of video which doesn't loss the video quality after compression of video's
size. The purposes approved out for the multicast video streaming WiMax is the system concern to carry out its
sub stream configuration of video stream. Performance estimate comparison for the existing process is analyzed
and its relative assertion is tabularized. The original thought is intended for improving performance metrics such
as packet delivery ratio, throughput, average delay, energy and frame lost ratio to manage Quality of Service i
n WiMax networks using NS-2.35. Multicast routing protocol PUMA, RTP, RTCP is used to attain scalability in the
network. The Wireless patch of NIST forum
Section ID: 2
Plagiarism detected: 0.02% ofdocument
this
is Type: regular
located in: Size: 6 words
[click the link to activate Side-by-Side compare]
Demo mode! Please register!
[click to show more resources]
is used to carry out the

[Incomplete report data! Please purchase the software to get complete Report!]

Warning: Demo Version - reports are incomplete!

To get full version, please order the software:

simulations.Keywords: - WiMax, video streaming, NP-Complete, MPEG, HD, NS-2.. Table of Content Abstract..
..............................011. Introduction...........................141. 1 Introduction...........................14-161. 2 The IEEE 802.16
NS-2 Modules...................16-171. 3 Motivation...........................171. 4 Problem Statement.......................171. 5 Aim
and Objective.......................17-181. 6 Scope and Limitation......................181. 7 Thesis
Organization........................182. Literatur e Review.........................192.1 Literatur e Review.........................20-262.2
An overview of WiMax (IEEE 802.16) ................26-282.3 WiMax Ne twork Ac.......................283. Scalable video
..........................293.1 Scalable video ...............................303.2 Types of Scalability..........................30-
313.2.1Temporal Scalability .........................313.2.2 Spatial Scalability .........................313.2.3 Intra Frame Coding
Techniques ...................313.2.4 Non-Intra Frame Coding Techniques .................313.2.5 P Frames
Techniques ...................313.2.4 Non-Intra Frame Coding Techniques .................313.2.5 P Frames
............................313.2.6 B Frames ...........................31-323.3 Benefits of Scalable Video ....................33-34 3.4
Video Streaming ...........................343.5 Working of video streaming ....................34-363.6 Layered Video Stream
........................36-373.7 Packet Buffer ..........................37-383.7.1 NP-complete ............................383.8 Energy
Efficient Video Streaming .....................383.9 Applications of Video Streaming .....................39-404. Streaming
Architecture ........................414.1 Streaming Architecture ........................41-424.2 There are several methods of
streaming available .............42-434.3 Streaming Architecture .........................434.3.1 Content Preparation
........................444.3.2 Streaming Server ..........................444.4 IP Streaming Network ........................444.4.1 Media
Player ............................444.5 Streaming Protocols ..........................454.5.1 Routing protocol
..........................454.5.1.1 Real-Time Protocol (RTP) ....................454.5.1.2 Real-time control protocol (RTCP)
..................46-484.5.1.3PUMA ............................484.6 Streaming Media Distribution .....................494.6.1 Unicast
.............................494.6 .2 Multicast ...........................494.6.3 Broadcast ...........................494.7 Video Codecs and
Video Types .....................49 4.7.1 Codecs .............................49-504.7.2 MPEG .............................504.7.3 High
Definition (HD) .......................51-525. Simulation Environment ........................53Simulation
Environment........................53 Network Simulators..........................53-54NS-2 COMPONENTS
.........................55GloMoSim ............................56OPNET Modeler ..........................565.6 Why NS-2 is
better..........................575.7 Simulation Parameter.........................57Performance Matrices ...........................58-596.
Implementation and Result ........................606.1 Types of text ..............................616.2 Result Analysis
..........................62-666.3 Result of MP4 Video 34MB ........................66-706.4Result of MP4 Video 53MB
......................70-746.5 Result of MP4 Video 104MB......................74-786.6 Result of MP4 Video
400MB......................78-826.7 Result of MP4 Video 154MB......................82-866.8 Result of HD Video
353MB......................86-906 .9 Result of HD Video 172MB......................90-947. Conclusion ..............................957.
1 Conclusion.. .............................967.2 Future Work..............................96-978. Reference ................................98-
1019. List of Publication .............................102List of Figure Figur e 1 WiMax Network .........................15Figure 2.1
WiMax network deployment option.................27Figure 2.2 WiMax architecture....................... 28 Figure 3.1: Frame
Coding Techniques ..........................32Figure 3.2: Frame structure in WiMax .....................35Figure 3.3: One
overlay containing three layer parts ...............37Figure 3.4: Three Part Buffers........ ..................38Figure 3.5:
Streamline Media .. ........................39Figure 4.1: A typical streaming system infrastructure ..................42Figure 4.2:
Steps of streaming content preparation... ...............43Figure 4.3: RTP packet header ........................46Figure 4.4:
RTCP Message type .........................47Figure 4.5: PUMA ............................48Figure 5.1 some component explain
below .......................55Figure 6.2.1 PDR............................................................................................... 67Figure 6.2.2
Throughputs .........................68Figure 6.2.3 End to End Delay MP4 Video...................70Figure 6.2.4
Energy.............................71Figure 6.3.1 PDR MP4 Video 34MB......................100Figure 6.3.2 Throughput MP4
Video 34MB....................101Figure 6.3.3 End to End Delay MP4 Video 34MB................102Figure 6.3.4 Energy
MP4 Video 34MB.....................103Figure 6.4.1 PDR MP4 Video 53MB......................104 Figure 6.4.2 Throughput
MP4 Video 53MB....................105Figure 6.4.3 End to End Delay MP4 Video 53MB................107Figure 6.4.4
Energy MP4 Video 53MB.....................108Figure 6.5.1 PDR MP4 Video 104MB......................109Figure 6.5.2
Throughput MP4 Video 104MB....................110Figure 6.5.3 End to End Delay MP4 Video
104MB.................111Figure 6.5.4 Energy MP4 Video 104MB.....................112Figure 6.6.1 PDR MP4 Video
400MB......................73Figure 6. 6.2 Throughput MP4 Video 400MB....................74Figure 6.6.3 End to End Delay
MP4 Video 400MB...................76Figure 6.6.4 Energy MP4 Video
400MB........................................................................... 77Figure 6 .7.1 PDR MP4 Video 154MB......................78Figur
e 6.7.2 Throughput MP4 Video 154MB....................79Figure 6.7.3 End to End Delay MP4 Video
154MB.................80F igure 6.7.4 Energy MP4 Video 154MB......................82Figure 6.8.1 PDR HD Video
353MB.........................83Figure 6.8.2 Throughput HD Video 353MB....................85Figure 6.8.3 End to End Delay
HD Video 353MB...................86Figure 6.8.4 Energy HD Video 353MB........................87Figure 6.9.1 PDR HD
Video 172MB ........................88Figure 6.9.2 Throughput HD Video 172MB................... 89Figure 6.9.3 End to End
Delay HD Video 172MB .................90Figure 6.9.4 Energy HD Video 172MB......................91List of Table Table
5.5- Why NS-2 is better .................... 61Table 5.6- Simulation Parameter .........................62Table 6.2.1 PDR
.............................67Table 6.2.2 Throughputs .........................69Table 6. 2.3 End to End Delay MP4
Video....................70Table 6.2.4 Energy............................ 72Table 6.3.1 PDR MP4 Video
34MB......................100Table 6.3.2 Throughput MP4 Video 34MB....................101Table 6.3.3 End to End Delay
MP4 Video 34MB................102Table 6.3.4 Energy MP4 Video 34MB.....................103Table 6.4.1 PDR MP4
Video 53MB......................104 Table 6.4.2 Throughput MP4 Video 53MB....................105Table 6.4.3 End to End
Delay MP4 Video 53MB................107Table 6.4.4 Energy MP4 Video 53MB.....................108Table 6.5.1 PDR
MP4 Video 104MB......................109Table 6.5.2 Throughput MP4 Video 104MB....................110Table 6.5.3 End to
End Delay MP4 Video 104MB.................111Table 6.5.4 Energy MP4 Video 104MB.....................112Table 6.6.1
PDR MP4 Video 400MB......................73 Table 6.6.2 Throughput MP4 Video 400MB....................74 Table 6.6.3
End to End Delay MP4 Video 400MB...................76 Table 6 .6.4 Energy MP4 Video
400MB........................................................................... 77 Table 6.7.1 PDR MP4 Video 154MB......................78 Table
6.7.2 Throughput MP4 Video 154MB....................79 Table 6.7.3 End to End Delay MP4 Video
154MB.................80 Table 6.7.4 Energy MP4 Video 154MB......................82 Table 6.8.1 PDR HD Video
353MB.........................83 Table 6.8.2 Throughput HD Video 353MB....................85 Table 6.8.3 End to End Delay
HD Video 353MB...................86 Table 6.8.4 Energy HD Video 353MB........................87 Table 6.9.1 PDR HD
Video 172MB........................88 Table 6.9.2 Throughput HD Video 172MB................... 89 Table 6.9.3 End to End
Delay HD Video 172MB.................90 Table 6.9 .4 Energy HD Video 172MB......................Abbreviates A AODV-
Delay HD Video 172MB.................90 Table 6.9 .4 Energy HD Video 172MB......................Abbreviates A AODV-
Ad-hoc on Demand Distance Vector Routing AOMDV- Adhoc On-demand Multipath Distance Vector ADSL -
Asymmetric Digital Subscriber LineAVC- Advanced Video Coding AMC - Adaptive Modulation & Coding ARQ -
Automatic Repeat RequestAPP- Application-precise message AP - Access Points B BS - Base Station BE -
Best Effort Service BWA - Broadband Wireless Access BYE- Good Bye C CNAME- Canonical Finish-point
Identifiers CODEC- compresses and decompressD DVB-H-Digital Video Broadcasting - Handheld DCT-
Discrete Cosine Transform E ESG- Electronic Service Guide ErtPS- Extended real time polling service Vice ED
- Error Detection F FDD - Frequency Division Duplexing FTP - File Transfer Protocol FED - Forward Error
Detection FEC - Forward Error Correction H HD- High Definition I IEEE - Institute of Electrical & Electronic
Engineer IEs - Information Element ISPs -Internet Service Provider IDFT- Inverse Discrete Fourier TransformL
LL- Link LayerM MAC - Medium Access Control MSs - Multiple Subscriber Station MBS - Multicast and
Broadcast service MPEG - Moving Picture Expert Group MCS - Modulation Coding Scheme N NS-2 - Network
Simulator 2 NrtPS - Non Real Time Polling ServiceNIST -National Institute of Standards and Technology O
OFDM - Orthogonal Frequency Division Multiple Access OTCL - Object Oriented Tool Command Languages O-
DRR - Opportunistic Deficit Round Robin OPNET - Optimum Network Performance P PHY - Physical Layer
PDR - Packet Delivery Ratio PHS - Payload Header Suppression PDUs - Protocol Data Units PMP - Point to
Multipoint PUSC - Partially Used Sub Channelization Q QoS - Quality of Services R RtPS - Real Time Polling
Service RLC - Radio Link Control RTP- Real time Protocol RTCP-Real time Control Protocol REMP- Reliable
Efficient Multicast Protocol RR- Receiver RecordS SS - Subscriber Station SAP - Service Access Point SFID -
Service Flow Identifier SDU - Service Data Unit SFID- Service Flow identifier S-REMP -Scalable Reliable
Efficient Multicast protocol T TCL - Tool Command Language TLV - Type Length Value TDD - Time Division
Duplexing TCP - Transmission Control Protocol TGA - Traffic Generation Agent TDMA - Time Division
Multiplexing U UL- MAP - Uplink Map UGS - Unsolicited Grant Service UDP - User Datagram Protocol V VoIP -
Voice over IP (Internet Protocol) VBR - Variable Bit Rate W WIMAX - Worldwide Interoperability for Microwave
Access WRR - Weighted Round Robin WI-FI- Wireless FidelityCHAPTER 1 INTRODUCTION CHAPTER 1
INTRODUCTION IntroductionThe internet is worldwide system for inter connection of computer system using
internet protocol. The internet carriers an extensive range of information resources and services such a
documentation, videos, e-mail, telephony and file sharing. There are different source on internet connection su
ch as with cables, Bluetooth, Wi-Fi and WiMax. WiMax is used to provide WiMax internet access for subscribers,
they support various network services. One of these multicast and broadcast service which can be used for
multimedia traffic deliver to large- scale user communities. WiMax can provide wide area coverage and quality
of service capabilities for applications ranging from real-time delay-sensitive voice-over-IP (VoIP) to real-time
streaming video and non-real-time downloads, make sure that subscribers get the performance they expect for
all types of communications [1].The idea of working in this field generated by studying some past research
papers. S. Sharangi et.al [2] gives the best and suitable area for selection of my work. WiMax can help service
providers meet many of the challenges they face due to increasing customer demands without discarding their
existing infrastructure investments because it has the ability to interoperate across various network types. In the
WiMax physical layer, transmitted the data using multiple carriers in TDD (Time Division Duplex) frames.
Header information contains each frame and upload/download observed by means of bursts of user facts.
Because video broad casting is expected to be a regular traffic pattern in destiny networks, the WiMax known
defines a carrier referred to as MBS (Multicast and Broadcast Service) in the MAC layer to facilitate broadcast
and multicast. Using MBS, a positive area in each TDD body may be set aside for multicast-only or broadcast-
handiest information that the whole frame can also be specific as a down load-most effective broadcast body. A
main venture of the MBS module is to allocate video information from multiple streams to the MBS statistics
place in every frame such that the actual-time nature of all video streams is maintained. similarly, the allocation
set of rules need to keep in mind that the receiver devices have confined buffer ability which may also purpose
statistics loss because of buffer overflow. The constraint imposes inflexible QOS and efficiency needs at the
allocation. Figure 1 .1: WiMax Network [3]The IEEE 802.16 standard defines five QoS classes: Unsolicited
Grant Service (UGS), real- time Polling Service (rtPS), non-real-time Polling Service (nrtPS), and Best Effort
(BE).The IEEE 802.16e amendment added a fifth QoS class, called extended real-time Polling Service (ertPS).
The five defined QoS classes are described as follows [4]:UGS (Unsolicited Grant Service) supports real-time
service flows that have fixed-size data packets on a periodic basis. The BS provides grants in unsolicited
manner. The UGS subscribers are prohibited from using contention request opportunities. RtPS (Real Time
Polling Service) supports real-time service flows that have variable size data packets on a periodic basis. The
BS periodically provides unicast request opportunities in order to allow the SS to specify the desired bandwidth
allocation. The SS is prohibited from using contention request opportunities. nrtPS (Non Real Time Polling
Service) is designed to support non real-time service flows that have variable size data packets on a periodic
basis. The SS can use contention request opportunities to send a bandwidth request with contention. The SS
can also provide unicast request opportunities BE (Best Effect Service) is used for best effort traffic where no
throughput or delay guarantees are provided. The SS can use unicast request opportunities as well as
contention request opportunities. When the BS or the SS creates a connection, it links the connection with a
service [5]. A service flow provides unidirectional transport of packets either to uplink packets that are
transmitted by the Subscriber Station (SS) or to downlink packets that are transmitted by the Base Station (BS).
It is characterized by a set of parameters as a Service Flow identifier (SFID), service class name (UGS, rtPS,
ertPS, nrtPS, or BE) and QoS parameters (such as Maximum maintained traffic rate, minimum reserved traffic
rate and maximum latency).1.2 The IEEE 802.16 NS-2 Modules The buildup 802.16-based WiMax module
rate and maximum latency).1.2 The IEEE 802.16 NS-2 Modules The buildup 802.16-based WiMax module
named when the Mac (802 16) group is in accordance among the specifications of the IEEE 802.16-2004
standard and supported on the ns-2 version 2.35. All components are proposed by using object oriented
programming language OTCL and C++ and representation as several classes. The association between the
WiMax module and ns-2 components is based on the unique network component stack of the ns-2. It shows the
kind of objects on behalf of the traffic generating agent (TGA), the link layer (LL), the interface queue (IFQ), the
designed MAC layer (WiMax module), and the PHY layer (Channel).Primary , the TGA is thinker simply as an
application level traffic generator that generates VoIP, MPEG, FTP, HTTP traffic, and so on. The traffic is
classified into five different varieties of service: the UGS, rtPS, ertPS, nrtPS, and BE each with its own priority
[6]. All packets will be transferred to different types of priority queues according to their service types by using
CS layer SFID-CID mapping mechanism. The data packets in these queues are treated as MSDUs and will be
selected to pass into the WiMax module in a round robin manner while the WiMax module in the SS receives
the MSDUs from the Queue object. The MAC management component will initiate the ranging process to enter
the WiMax system or to transmit the MSDUs according to the scheduled time obtained from UL-MAP. Once the
process has been successfully finished in the MAC layer, the Network Interface [7] will add a propagation delay
and broadcast in the air interface. Worldwide Interoperability for Microwave Access (WiMax) is based on 802.16
standards and its amendment 802.16e. It is a Broadband Wireless Access (BWA) technology that promises a
large coverage and high throughput. Theoretically, the coverage range [8] can reach 30 miles and the
throughput can achieve 75 Mbit/s. Yet, in practice the maximum coverage range observed is about 20 km and
the data throughput can reach 9 Mbit/s using User Datagram Protocol (UDP) and 5 Mbit/s is using File Transfer
Protocol (FTP) over Transmission Control Protocol (TCP). The network simulation presents a solution to test the
performance of technologies. Network Simulator 2 (NS-2) is a widely used tool to simulate wireless
networks.Motivation Computer networks become more important due to the rapid growth of the Internet into our
daily lives. We can observe the gradual deployment of new multimedia applications such as the World Wide
Web, e-mail, video conferencing, video-on-demand, instant messaging, and Voice over IP (VoIP). These
applications generate traffic with characteristics that differ from traffic generated by data applications, and they
are more sensitive to delay and loss. As parts of the Internet become heavily loaded, congestion may occur
which may lead to buffer overflows and packet loss. It may also lead to packet delay as packets take longer to
process. Latency may seem acceptable for some applications such as e-mail and file transfer. For real-time
applications, data becomes obsolete if it does not arrive in time. Problem Statement The proposed work will
result better efficiency and lower packet loss over the WiMax network. In this proposed work, I am offering an
approach to share the available bandwidth in such a way that it will give the better efficiency and throughput
without changing the infrastructure or the routing algorithm. Aim and objective Aims and objectives of this thesis
work are summarized as follow: The study focuses on analysis of WiMax and its consequences. Analyzing the
effects of WiMax in the various Network load, packet delivery ratio, Throughput , end-to-end delay and residual
energy over WiMaxSimulating the WiMax using real time routing protocols such us PUMA, RTCP and RTP.
Comparing the results of RTP, RTCP and PUMA protocols to analyze which protocol are more susceptible to
WiMax.1.6 Scope and limitationThe scope of WiMax deployment will broaden to cover markets with poor copper
quality which have acted as a brake on extensive high-speed Internet and voice over broadband [8]. WiMax will
reach its peak by making Portable Internet a reality. When WiMax chipsets are integrated into laptops and other
portable devices, it will provide high-speed data services on the move, extending today's limited coverage of
public WLAN to metropolitan areas. Integrated into new generation networks with seamless roaming between
various accesses, it will enable end-users to enjoy an "Always Best Connected" experience. The combination of
these capabilities makes WiMax attractive for a wide diversity of people: fixed operators, mobile operators and
wireless ISPs (Internet Service Provider), but also for many vertical markets and local authorities. Alcatel, the
worldwide broadband market leader with a market share in excess of 37%, is committed to offer complete
support across the entire investment and operational cycle required for successful deployment of WiMax
services, with limitation. The WiMax BWA is well suited to provide the reliability and speed for meeting the
requirements of small and medium size businesses in low density environments. One disadvantage of WiMax is
the spectral limitation, in other words limitation of wireless bandwidth. For use in high density areas, it is
possible that the bandwidth may not be sufficient to cater to the needs of a large clientele, driving the costs
high.1.7 Thesis Organization The rest of the thesis is organized as follows: In chapter 2 , Describe a brief
description and literature review of WiMax. In chapter 3, state about routing protocols of WiMax. Chapter 4,
Describes Video Streaming and problems associated with it and also MPEG and HD. Chapter 5, Describe
about the simulation environment and performance matrices. Chapter 6, Shows implementation and result work.
Chapter 7, Describes conclusion and future work. CHAPTER 2 LITERATURE REVIEW CHAPTER 2
LITERATURE REVIEW 2.1 Literature Review Shelly Kalra et.al [9 ] For the multicast and broadcast service of
mobile, WiMax is the fastest growing technology in the past few years, because it provides the multimedia
content to large scale users in a cost efficient manner. The high speed WiMax networks have made it possible to
provide real time multimedia streaming. The paper shows an exhaustive survey of recent work addressing
multicasting of video streams over WiMax networks. The research was done by using different coding
techniques used for scalable video streaming and various algorithms used for multicasting of video streams over
WiMax has. It can be easily concluded that multicasting of video streams over WiMax can be carried out by
many ways. All the algorithms and coding techniques used for multicasting and for its quality of service works in
real time applications. Multicasting of video streams over WiMax can be very useful in future
applications.Sheraz Maki Mohd Ahmed et.al [10 ] the paper focuses on studying video streaming over WiMax.
applications.Sheraz Maki Mohd Ahmed et.al [10 ] the paper focuses on studying video streaming over WiMax.
The multicast/broadcast service (MBS) is a feature provided by WiMax technology and works under MAC layer,
which provides connection oriented and quality of service support. The author mentioned that the streaming
video over an MBS is more efficient in terms of resource management by focusing on a certain area and
ensuring high bit rate that results in a higher quality service. The paper also focuses on video streaming
architecture. It has also presented PUMA in WiMax and multicast/broadcast service (MBS). Gopikrishnan.R et.al
[11 ] The paper implements a new protocol in the project REMP (Reliable Efficient Multicast Protocol). To
overcome the above problems this REMP is mainly suggested for MAC level Multicast protocol for increasing
reliability and efficiency. The efficient is satisfied by the adjustment of the MCS (Modulation coding scheme) and
reliability is satisfied selective for invalid multicast protocol. The author also represents some additional work by
implementing the another protocol S-REMP (Scalable Reliable Efficient Multicast protocol) is for delivery of
minimal quality video to all user and higher video quality is provided to the users exhibiting better channel
conditions. It also shows the MAC-level multicast protocol named REMP that enhances the reliability and
efficiency of multicast transmissions in IEEE 802.11n WLANs. In REMP, AP selectively retransmits erroneous
multicast frames and dynamically adjusts MCS under varying channel conditions based on the advanced
feedback mechanism from multicast receivers. The researcher represents a very valuable and effective
work.Bilal Ahmed et.al [12] shows whether WiMax technology can allow the network performance comparable
with that of ADSL for different applications, especially video applications, encoded by MPEG-x codec's. OPNET
modeler is used to simulate the idea and four parameters, delay, packet loss, jitter and throughput, are observed
to compare the results with ADSL. Two hour video is used for simulation and three subscribers at different
distances It modeled for WiMax network. One subscriber for ADSL network. A simulation result shows that
ADSL performance was ideally good. WiMax performance also promising within the boundaries of defined
limits. Initially packet loss rate was very high in WiMax the problem was overcome was by fine tuning and re-
configuration. Unicast traffic is used to model the video streams in the simulation. Multicast traffic provides better
result.Jaswant Kumar Joshi et.al [13 ] the paper based analyses the performance of four different routing
protocols namely ZRP, AODV, AOMDV, and DDIFF for the improvement of the quality of streamed video in
Mobile Ad-hoc Network. Researcher's uses throughput, average end-to-end delay and packet delivery fraction
(PDF) with respect to varying pause time to analyze a video streaming quality over used routing protocols on
MANET. It also shows some analysis of routing protocols namely Zone Routing Protocol (ZRP), Ad-hoc On-
demand Distance Vector (AODV), Ad-hoc On-demand Multipath Distance Vector (AOMDV), and Ditched
diffusion (DDIFF) protocols. Results of the final analysis was compared on the basis of their performances for
video streaming data by considering different Quality of Service (QoS) performance metrics such as average
throughput, average end-to-end delay and packet delivery fraction (PDF). The paper concluded that the overall
performance of DDIFF and ZRP is better in term of packet delivery fraction as well as average end-to-end delay
among other used protocols. While, in term of average throughput AODV and DDIFF has produced better results
with compare to others.M. Imran Tariq et.al [14 ] the paper evaluates the performance of different VoIP codes
over the best effort WiMax network. The network performance metrics such as jitter, one way delay, and packet
loss and user perception metric that is Mean Opinion Score have been used to evaluate the performance of
VoIP codes. The simulation is performed in QualNet simulator with varying values of Packet size, number of
calls and jitter buffer sizes. The results indicate that varying the jitter, buffer size and packetization time, affects
the quality of voice over the best effort network. The VoIP have better voice quality with higher MOS under no
RTP jitter buffer as compared with the RTP jitter Buffer. Packet loss ratio is also noted twice time losses instead
with jitter buffer. The works also evaluate various values for the packet stay time in jitter buffer and best one was
used in our experiments. The paper conclude that the RTP jitter buffer has a significant effect on the overall
performance of the VoIP application especially in the best effort scheduling class due to its list priority. K.
Sakthisudhan et.al [15 ] according to the paper intellectual mobile terminals (or users) of next generation WiMax
networks are expected to initiate/Establish voice over IP (VoIP) calls using session set-up protocols like H.323
or SIP (Session Initialized Protocols. The author analyzes the performance of the H.323 call setup procedure
over the WiMax link. The proposed model, application layers in the RTP Control Protocol and Real-Time
Transport Control Protocol (RTCP) is used in two different modes of call establishment. Initiate services through
VoIP used for H.323 control packets. Their analytical model provides that the VoIP call set-up performance, jitter
and delay in peer to peer networks. The author also concluded that the call setup performance can be improved
significantly using the robust in application link layer such as RTP/RCTP with a comparison of heterogeneous
network proposed. The analytical results are validated by our experimental measurements.Chandra. R et.al [16 ]
the main idea of the paper was to propose for improving performance metrics such as good put, average delay
and frame lost ratio to attain Quality of Service in WiMax networks. In this work the key issues involved in the
multicasting of video stream over WiMax is carry out. The mathematical solution is analyzed for selecting the
optimal sub streams of scalable video streams under bandwidth constraints to maximize the quality for mobile
receivers. Swarna Parvathi.S et.al [1 7] the paper explains multicasting of scalable video streams over WiMax
networks. The main goal is to perform video streaming over WiMax networks. Multicast routing protocol PUMA is
used to achieve scalability in the network. PUMA achieves deist packet delivery ratio with variable number of
nodes. It proposes an experimental setup for simulation study to multicast those selected sub streams to Mobile
Stations (MS) via WiMax Base Station (BS). The WiMax patch of NIST forum is used to carry out the
simulations. The PSNR was compared with the MOS values. The author also concluded that SVC performs
better in WiMax networks than WLAN. Devising video-aware BS scheduling algorithms is a promising subject
for further investigation.Somsubhra Sharangi et.al [1 8] the author focuses on WiMax networks that transmit
for further investigation.Somsubhra Sharangi et.al [1 8] the author focuses on WiMax networks that transmit
multiple video streams encoded in scalable manner to mobile receivers. The author also focuses on two
research problems: first one is maximizing the video quality and second one is minimizing energy consumption
for mobile receivers. They also solve the sub stream selection problem to maximize the video quality, which
arises when multiple scalable video streams are broad cast to mobile receivers with limited resources. They
mentioned that this problem is NP-Complete, and design a polynomial time approximation algorithm to solve it.
The researcher also provide algorithm to reduce the energy consumption of mobile receivers. The paper
concluded that the approximation factor of the proposed algorithm is very close to one for practical scenarios.G.
Sasi et.al [19 ] the author developed a trust based security protocol which attains confidentiality and
authentication of packets in both routing and link layers of MANET. WiMax networking provides numerous
opportunities to increase productivity and cut costs. It also alters an organization's overall computer security risk
profile. The author also mentioned that it is impossible to totally eliminate all risks associated with WiMax
networking; it is possible to achieve a reasonable level of overall security by adopting a systematic approach to
assessing and managing risk. This paper discussed here the threats and vulnerabilities associated with each of
the three basic technology components of WiMax networks (clients, access points, and the transmission
medium) and described various commonly available countermeasures that could be used to mitigate those
risks. It also stressed the importance of training and educating users in safe WiMax networking
procedures.Cheng-Hsin Hsu et. al [20] The author presents general framework for optimizing the quality of video
streaming in WiMax networks that are composed of multiple WiMax stations. The framework is important in
many ways like (i) it can be applied to different WiMax networks, such as WiMax, (ii) it can employ different
objective functions for the optimization, and (iii) it can adopt various models for the WiMax channel, the link
layer, and the distortion of the video streams in the application layer. The optimization framework controls
parameters in different layers to optimally allocate the WiMax network resources among all stations [21]. Our
experimental and simulation results show that significant quality improvement in video streams can be achieved
using our solution, without incurring any significant communication or computational overhead.Shamik
Sengupta et.al [22] the author describes flexible features offered at the medium access control (MAC) layer of
WiMax for construction and transmission of MAC protocol data units (MPDU) for supporting multiple VoIP
streams. The study mainly consists of the quality of VoIP calls, usually given by R-score, with respect to delay
and loss of packets. They also observe that loss is more sensitive than delay hence they compromise the delay
performance within acceptable limits in order to achieve a packet loss rate. Through a combination of
techniques like Forward Error Correction, Automatic Repeat Request, MPDU Aggregation, and mini slot
allocation, a balance is considered between the deist delay and loss. Simulation experiments are conducted to
test the performance of the proposed mechanisms. The main work done here is with the help of three-state
Markovian channel model and study the performance with and without retransmissions[10]. The research also
shows that the feedback-based technique coupled with retransmissions, aggregation, and variable length
MPDUs are effective and increases the R-score and mean opinion score by about 40%.Heiko Schwarz et.al [23
] This paper provides an overview of the basic concepts for extending H.264/AVC towards SVC. The basic main
tools used for the analysis are temporal, spatial, and quality scalability are described in detail and
experimentally analyzed regarding their efficiency and complexity. The possibility to employ hierarchical
prediction structures for providing temporal scalability with several layers while improving the coding efficiency
and increasing the effectiveness of quality and spatial scalable coding. They show their results and analysis by
comparing the images of both the simulation which shows some main factors like new methods for inter-layer
prediction of motion and residual improving the coding efficiency of spatial scalable and quality scalable coding.
The concept of key pictures for efficiently controlling the drift for packet-based quality scalable coding with
hierarchical prediction structures. Single motion compensation loop decoding for spatial and quality scalable
coding providing a decoder complexity close to that of single-layer coding. The support of a modified decoding
process that allows a lossless and low-complexity rewriting of a quality scalable bit stream into a bit stream that
conforms to a non scalable H.264/AVC profile.Thomas Schierl et.al [24 ] the article shows a multisource
streaming approach is presented to increase the robustness of real-time video transmission in MANETs. The
analysis in this paper was done with the help of video coding as well as channel coding techniques on the
application layer, exploiting the multisource representation of the transferred media. Source coding is based on
the scalable video coding (SVC) extension of H.264/MPEG4-AVC with different layers for assigning importance
for transmission. Channel coding is based on a novel unequal packet loss protection (UPLP) scheme, which is
based on Raptor forward error correction (FEC) codes. While in the presented approach, the reception of a
single stream guarantees base quality only, the combined reception enables playback of video at full quality
and/or error rates. Furthermore, an application layer protocol is introduced for supporting peer-to-peer based
multisource streaming in MANETs. Meng Guo et.al [25 ] the author develops a scheme using the transmission of
a single-description coded video over an application layer multicast tree formed by cooperative clients. Video
continuity is maintained in spite of tree disruption caused by departing clients using a combination of two
techniques named 1) providing time-shifted streams at the server and allowing clients that suffer service
disconnection to join a video channel of the time-shifted stream, and 2) using video patching to allow a client to
catch up with the progress of a video program. Simulation experiments demonstrate that our design can achieve
uninterrupted service, while not compromising the video quality, at moderate cost. The conclusion of research
shows different features like 1) lossless video reception, 2) stable video quality, 3) Continuous video streaming,
4) Compared to Coop Net's MDC-based system and, 5) Moderate complexity. John G. Apostolopoulos et.al [26 ]
4) Compared to Coop Net's MDC-based system and, 5) Moderate complexity. John G. Apostolopoulos et.al [26 ]
the article examines the challenges that make simultaneous delivery and playback, or streaming, of video
difficult, and explores algorithms and systems that enable streaming of pre-encoded or live video over packet
networks such as the Internet. I continue by providing a brief overview of the diverse range of video streaming
and communication applications. Understanding the different classes of video applications is important, as they
provide different sets of constraints and degrees of freedom in system design. The work reviews video
compression and video compression standards with identifies the three fundamental challenges in video
streaming: unknown and time-varying bandwidth, delay jitter, and loss. These fundamental problems and
approaches for overcoming them are examined. Standardized media streaming protocols are also described
here and additional issues in video streaming are highlighted. 2.2 An Overview of WiMax (IEEE802.16)WiMax ,
officially termed as IEEE 802.16 standard is the state of the art technology developed for wireless metropolitan
area networks (WMANs). While the 802.11 standard was designed for local area wireless networks, the 802.16
standard was designed to address wireless metropolitan area networks. This technology offers several
advantages such as longer range of up to 30 miles and high data rates up to 70Mbps [1]. Salient features of
IEEE 802.16 include adaptive modulation scheme from 64-QAM to QPSK, OFDM technology, directional
antennas and transmit and receive diversity. Some of the major MAC layer improvements incorporated in this
technology are connection oriented protocol, QoS based packet scheduling and differentiated services based
on traffic requirements [27]. The primary function of WiMax is to offer last-mile wireless broadband services. But,
the 802.16 network can also be designed to efficiently serve as a backbone for 802.11 access points (AP) for
connecting to the Internet. The advantages of WiMax over other popular technologies such as Wi-Fi [28].There
are two types of deployment of WiMax networks - point to multipoint and mesh network topology as shown in
Figure 2.2. The most common type is point-to-multipoint where every communication has to be routed through
the base station. In such networks, transmission scheduling is easier. The mesh and point-to-multipoint
topologies are compared. The remainder of this report is based on WiMax networks working in point-to-
multipoint topology.These standards are described as follows: 802.16a :- This amendment specifies non line-of-
sight extensions in the 2-11 GHz spectrum and supports data rates up to 70 Mbps and range up to 20 miles.
802.16d: - This amendment was developed to address backhaul applications. The standard for discusses on the
11 to 66 GHz unlicensed spectrum. 802.16e:- This amendment enables support for combined exempt and
mobile operation for licensed and license exempt frequencies below 11 GH Figure 2.1 : WiMax network
deployment options [29]2 .3 WiMax Network ArchitectureThe architecture of a sensor node is shown in Figure
2.3. A WiMax node consists of four major components: Sensing unit Processing unit Transceiver unit Power unit
A sensor may also have a global positioning system (GPS) and mobilize for localization and static
respectively.Figure 2.2 WiMax architecture [29]Network generates analog signal of sensed data, which is
converted to digital signal by the analog-to-digital converter (ADC), and is transmitted to the processing unit. The
processing unit has an embedded micro-controller that performs the computing job. Transceiver unit is
responsible for data transmission. Power unit manages the power supply to all other components. Chapter 3 SC
ALABLE VIDEOChapter 3 SC ALABLE VIDEO3.1 Scalable videoThe term "scalable" in the context of video
stands for the general concept of coding an image sequence in a progressive manner. Meaning, the internal
structure of the coded video allows for a trade-off between bit rate and subjective quality. The additional
flexibility is provided if parts of the video bit stream can be discarded with the result still representing a valid
video sequence. This requires a layered structure within the coded video that distinguishes basic information
from parts that represent only details. In this way, a video can be adjusted in a fast and easy way to changing
network conditions or the specific capabilities of the end users. With this concept in mind, scalable video can
also be co mpared to progressive JPEG in the still image domain. Progressive JPEG offers the possibility to
transmit the low frequency parts of the image first, giving a preliminary impression of what the final image would
look like. All following higher frequency information builds upon the first version and accumulates finer details.
3.2 Types of Scalability3.2.1 Temporal Scalability Scaling a video via its temporal resolution basically means
altering its frame rate. Simply discarding random frames from the video sequence is not feasible, because other
frames may depend upon them for motion compensation. So, in order to provide the choice between several
frame rates, the frames must be encoded in a certain manner - also called hierarchical prediction structure. The
numbers underneath the frames indicate their ordering within the coded bit stream, where motion prediction is
still conducted in a hierarchical fashion, while avoiding a structural delay in the decoding process of the
sequence, also visible by the lack of backward prediction and the steadily increasing encoding order of the
frames. Basically the concept of different temporal resolutions can just as well be achieved through pure
H.264/AVC. The advanced flexibility of H.264/AVC for choosing and controlling reference frames already
enables the use of hierarchical motion prediction.3.2.2 Spatial Scalability While the previous paragraph dealt
with temporal resolution, spatial scalability corresponds to different image resolutions. Similar to image
pyramids, every new layer within a spatial scalable bit stream improves the final image resolution. Over
simulcasting each layer separately, is marked in this graphic by the vertical arrows connecting two layers. They
illustrate the concept called inter-layer prediction. This prediction method strives to reuse as much information
as possible from one layer to the next. This avoids redundancy between the layers and subsequently improves
coding efficiency. Similar to motion prediction within one 14 layer, in the case of inter-layer prediction first the
final image is predicted from the corresponding picture in the reference layer and only the differences to the
actual image (also called residuals) are finally encoded. The most efficient way to perform inter-layer prediction
would be to depend on the completely reconstructed or decoded picture from the layer. This straight forward
method, however, would significantly increase the complexity of the decoder, due to the requirement of fully
method, however, would significantly increase the complexity of the decoder, due to the requirement of fully
decoding all underlying layers. Though single loop motion compensation slightly decreases coding efficiency, it
significantly simplifies the structure of the decoder [59], [60]. Hence, the following three inter-layer prediction
techniques recycle information from low level layers without entirely decoding them.3.2.3 Intra Frame Coding
TechniquesThe word intra coding refers to the actuality that the different lossless and lossy compression
techniques are executed that is included only within the current frame and not virtual to any other frame in the
video sequence. In further statements, no temporal processing is performed outside of the current picture or
frame. This mode will be described first because it is simpler, and because non-intra coding techniques are
extensions to these basics. The basic processing blocks shown are the video filter, discrete cosine transform,
DCT coefficient quantizer, and variable length coder. 3.2.4 Non-Intra Frame Coding TechniquesThe earlier
discussed intra frame coding techniques were maximized to processing the video signal on a spatial basis,
effective only to information within the current video frame. Considerably further compression efficiency can be
attained though, if the inherent temporal, or time-based redundancies, are broken as well. Anyone who has ever
taken a stumble of the old-style super-8 movie pictures and held it up to a light can confidently remember seeing
that most successive frames within a sequence are very parallel to the frames both previous to and later than the
frame of interest. Temporal processing to utilize this redundancy applies a technique known as block-based
motion compensated prediction, utilizing motion estimation.3.2.5 P FramesInitiating with an intra, or I frame, the
encoder can forward predic t a upcoming frame. This is generally referred to as a P frame, and it may also be
expected from other P frames, although only in a forward time approach. As like think about a group of pictures
that lasts for 6 frames. In this folder, the frame ordering is given as I,P,P,P,P,P,I,P,P,P,P,.Every P frame in this
sequence is expected from the frame instantly preceding it, whether it is a n I frame or a P frame. As remember, I
frames are coded spatially with no location to any other frame in the sequence.MPEG Display order Forward
prediction of P frames Forward prediction of B frames Backward prediction of B frames Figure 3.1: Frame
Coding Techniques [51]3.2.6 B FramesThe encoder also has the choice of using forward/backward interpolated
expectation. These frames are generally referred to as bi-directional interpolated expectation frames, or B
frames for short. As like of the handling of I, P, and B frames, consider a group of pictures that lasts for 6 frames,
and is provided as I,B,P,B,P,B,I,B,P,B,P,B,. As in the earlier I & P only like, I frames are coded spatially just and
the P frames are forward expected based on earlier I and P frames. The B frames however, are coded based on
a forward expectation from earlier I or P frame, as well as a backward expectation from a succeeding I or P
frame. As such, sequence is processed by the encoder such that the first B frame is expected from the first I
frame and first P frame, the second B frame is expected from the second and third P frames, and the third B
frame is expected from the third P frame and the first I frame then group of pictures.The major benefit of the
usage of B frames is coding efficiency. In most cases, B frames will outcome in less bits being coded overall.
Quality can also be developed in the case of moving entities that disclose hidden areas within a video
sequence. Backward expectation in this case allows the encoder to create extra intelligent decisions on how to
encode the video within these parts. Also, since B frames are not used to expect upcoming frames, errors
generated will not be transmitted further within the sequence. One drawback is that the frame restoration
memory buffers within the encoder and decoder must be doubled in size to contain the two secure frames.
Another drawback is that there will essentially be a delay throughout the method as the frames are delivered out
of order .3.3 Benefits of Scalable Video Previous to any further discussion about relevant aspects of encoding a
video in a scalable manner, the motivation behind this idea should be outlined. Broadly speaking, scalable in
comparison to non-scalable videos provides the following advantages. In case of simulcasting, several different
versions of the same video must be available in order to serve diverse user requirements. Obviously those
different versions bear a high degree of redundancy. Although encoded for different bit rates, they all represent
the same content. Scalable video strives to reduce this redundancy and can therefore produce a video stream
that requires significantly less storage space than the sum over all versions of a simulcast video stream. In
addition, scalable video streams can be encoded in a way to offer more than a limited number of diff erent bit
rate points. With this fine graduation the choice among bit rates is expanded to a whole range of possible
values. The management of different bit rate versions for the same video is avoided. With scalable video just
one bit stream can serve a diversity of client needs. As a consequence the adjustment of the bit rate is
simplified. It no longer involves switching between two separate bit streams, but can be carried out within the
same video stream. This convenience improves the flexibility of the video stream and increases the resilience
against variations or failures of the transmission link. In representing the encoded video in a layered structure,
scalable video assigns different importance to each layer. The base layer of a scalable video stream 11
comprises information that is fundamental for the playback of the video. Thus, it represents the most important
parts of the video stream. As the order of the enhancement layers on top increases their importance decreases.
The advantage of this layered structure is that specific parts of the encoded video stream can be prioritized. In a
network environment with limited band width capacity this prioritization enables to prefer those data packets that
are essential for the playback of the video. Hence, in cases were not enough band width is available to receive
the whole video, scalable video offers at least a low quality version of that video. Especially for peer-to-peer
networks scalable video offers another advantage that is related to the previous one. Generally, all peers of a
network do not form a homogenous group, but differ in their bandwidth or computing capacities. Thus, in case of
simulcast, they would demand different bit rates of the video and consequently also request different streams.
This leads to the problem that the overlay of the peer-to-peer network is breaking apart into smaller sub-groups.
Each subgroup shares only one version (i.e. bit rate) of the video stream. The peers are grouped according to
the requested bit rate (indicated by shades of gray). In case of simulcast the bit rates are represented by three
the requested bit rate (indicated by shades of gray). In case of simulcast the bit rates are represented by three
different streams. Therefore, peers with different bit rates cannot exchange data packets and form separate
subgroups. This separation of peers becomes in particular problematic in cases of STARK band width
fluctuations. If the available band width changes, peers may react to the new situation by requesting a different
bit rate of the current video stream. In case of a simulcasted video streams, the only way to do so is to request a
different stream. Since the old and the new stream are independent from each other, switching between them
requires leaving one group of peers and joining another one. Therefore, the whole neighbor structure has to be
rebuilt. Scalable video overcomes this complex switching task by offering one stream that can serve a variety of
bit rates. Hence, all peers stay within the same group, regardless of the requested bit rate. This simplifies
switching between bit rates and consequently improves the robustness of the system. In addition, the robustness
of the peer-to-peer network further benefits from another advantage of wide group of peers. The performance of
a peer-to-peer network relies on a profound number of peers that are willing to share their upload capacity. If
more peers offer the same video stream the robustness of the whole network is improved, because a failure of
one peer can very easily be compensated by other peers. However, it is important to notice that the increased
flexibility of scalable video also produces a certain amount of data overhead that reduces the coding efficiency
of the video stream. 3.4 Video StreamingStreaming multimedia permits the person to begin viewing video clips
stored on the server, without first downloading the complete file. After a sudden duration of initializing and
buffering the document will begin to exchange. Streaming video is normally sent from pre recorded video files,
but may be disbursed as a part of a live broadcast "feed". In a stay broadcast the video signal is converted into a
compressed virtual sign and transmitted from a unique internet server. This is capable to do multicast sending
the same report to multiple users to the same time. Buffering is ones PC downloading the video faster than it
performs [36]. 3 .5 Working of video streaming A video streaming over Wireless network is collected of three
main entities Source content Wireless Base Station Wireless Subscribers Station. Source contents are national
TV broadcasters; local broadcaster, internet TV operations and previous video transmit provision sources.
Multimedia contents are aggregate from different sources and sent to the WiMax base station. The Wireless
base station creates a plan to broadcast the incoming data to the subscribers. In the WiMax physical level, data
is transmitting above multiple carriers in Time Division Duplex (TDD) frames. Time P r e a m b l e FCH DL # 1
DL# 2 Map MBS MBS Data Area UL # 1 UL # 2 DL Map DL# 3 DL # 4 UL Map DL Sub frame UL Sub
frameFigure 3. 2: Frame structure in WiMax [40].Each frame contains description information and uploads follow
Id by burst of user information. Since video broadcasting is expected to be an established transfer pattern, the
Wireless standard defines a service called Multicast and Broadcast Service. To facilitate broadcast and
multicast in the MAC layer Using MBS, a certain area in each TDD frame. The entity frame can also be design
as a download-only broadcast frame. The problem of selecting optimal sub streams of scalable video streams
under bandwidth constraints. Solving this problem is significant because it enable the network operative to
transmit higher quality video or more number of video streams at the same capacity. Combined a group of TDD
frames together into a super-frame. Three user interaction models are considered. User can either be statically
hooked to a channel. User can choose to listen to a channel The user channel association can keep changing
based on the transmission intermediate condition but we do not consider the delay requirements which are
central to video streaming. To transmit scalable video streams in which two layers of each video are transmitted
separately. The base layer is transmitted as one stream over a reliable channel while the enhancement layer is
transmitted as a different stream over a less reliable channel. This work implements a rate adaptive multiple
description coding. However, it describes only one stream and it does not address the resource management
problem arising in multi stream transmission scenario.Considered splitting a video stream into two streams and
transmit them over two similar broadcast networks. The first stream is transmitting over DVB-H networks at all
time while the second stream is transmitting over WiMax networks mainly of the time. If the user decided to use
various other non-video purposes in equivalent, the stream going away through WiMax is degraded to
accommodate that application. This ensures a minimum video quality all the times while maintaining the
flexibility of using other applications. A burst scheduling algorithm for energy minimization on per subscriber
basis for unicast data. The algorithm arranges the mobile subscribers in ascending order based on the ratio of
the current data arrival rate to the required data rate. If the current rate is significantly higher than the requited
rate, the mobile subscriber can go to sleep for some interval. After computing the sleep interval for all mobile
subscribers the bursts are scheduled in a longest interval first manner. After transmission of each burst, the
algorithm checks to ensure that the data requirements of all mobile subscribers are being satisfied. The work in
is designed for unicast streaming of video and does not consider the multicast. Also the algorithm required
maintaining state information of all mobile subscribers serve by a base station Suggest a scheduling scheme
where the uni-cast data is clustered around the multicast data burst. They assumed that the burst length and
positions for a exacting stream is the same in all super-frames. The work in is designed for unicast streaming of
video and does not consider the multicast. Also the algorithm required maintaining state in sequence of all
mobile subscribers serve by a base station. Then they present an enhancement to the longest virtual buffer first
scheduling algorithm proposed. By clustering the unicast data around the multicast data bursts [41]. Their work
evaluates the energy efficiency in a multi-class traffic scenario; whereas our work is centre on the energy
efficiency of the video transmit provision. 3.6 Layered Video Stream In addition to the protocol aspects the
internal layered structure of a scalable video stream bears further challenges for Pulsar in comparison to single
layer video streams (i.e. H.264/AVC). For the remaining of this report the term "frame" refers to a set of layers
that together form a complete picture of the final video sequence. Parts Pulsar employs for each non-scalable
video stream exactly one overlay structure and consequently also only one set of strategies for requesting and
video stream exactly one overlay structure and consequently also only one set of strategies for requesting and
notifying packets. This concept proves too rigid for a scalable bit stream containing different layers, since it
would be favorable to adjust those strategies for each layer individually. Thus, the concept of an overlay
structure was expanded to cope with several layers. Now, each overlay can be comprised of several so called
parts. Each part is responsible for distributing one layer. An individual set of strategies can be assigned to each
part, which allows for specific strategies targeted at each layer. Although all parts of a scalable video stream
employ their own set of strategies, they are all united under the same overlay structure. Accordingly, all parts
rely on a unique list of neighbors as well as on a unique strategy to update them. This fact ensures that the
overall overlay stays connected and does not break apart into smaller sub-group for each layer.The concept of
parts can further be generalized by the fact that their application is not restricted to layers of a scalable video
stream. Also other data types that are related to the scalable video stream can be conveyed via parts. This is
especially important for Meta data such as general information about the stream or security information. Figure
3.3: One overlay containing three layer parts 3.7 Packet BufferOn the receiver side Pulsar collects all data
packets of a non-scalable video stream in a buffer. There all incoming packets are sorted; missing packets are
requested and after a specified time period the available packets are finally handed over to a buffer reader. 24
Those buffer readers are responsible for providing an interface for fetching available packets from the buffer.
The general concept of buffers and buffer readers is outlined for three layers. Due to the modifications of overlay
parts the buffer for incoming SVC packets has to be modified as well. Similar to the concept of parts, each layer
handles incoming packets in its individual buffer. Thus, it remains the task of combining those buffers to form a
single output stream. This is realized by an additional buffer reader, which functions as a wrapper around all
layer buffers and merges their output to a single output stream. If one may download the video and play the
video at the same time, it will prevent and create a buffer among the play times and down load time. This
buffering problem is solving through NP entity. Figure 3.3: Three Part Buffers Figure 3.4: Three Part Buffers
3.7.1 NP-completeNP-complete problems are in https://en.wikipedia.org/wiki/NP_(complexity) NP, the set of all
https://en.wikipedia.org/wiki/Decision_problem decision problems whose solutions can be verified in polynomial
time; NP may be equivalently defined as the set of decision problems that can be solved in polynomial time on a
https://en.wikipedia.org/wiki/Non-deterministic_Turing_machine non-deterministic Turing machine. A problem p
in NP is NP-complete if every other problem in NP can be transformed (or reduced) into p in polynomial
time.NP-complete problems are studied because the ability to quickly verify solutions to a problem (NP) seems
to correlate with the ability to quickly solve that problem ( https://en.wikipedia.org/wiki/P_(complexity) P). It is not
known whether every problem in NP can be quickly solved this is called the
https://en.wikipedia.org/wiki/P_versus_NP_problem P versus NP problem. But if any NP-complete problem can
be solved quickly, then every problem in NP can, because the definition of an NP-complete problem states that
every problem in NP must be quickly reducible to every NP-complete problem (that is, it can be reduced in
polynomial time). Because of this, it is often said that NP-complete problems are harder or more difficult than NP
problems in general.A decision problem is NP-complete if: C is in NP Every problem in NP is
https://en.wikipedia.org/wiki/Many-one_reduction reducible to C in polynomial time.C can be shown to be in NP
by demonstrating that a candidate solution. 3.8 Energy Efficient Video StreamingLet c onsider a scenario where
the stream data transmission can be scheduled to provide better energy efficiency at mobile subscribers.
Significant research studies have been dedicated for energy management at mobile subscribers utilizing the
sleep mode feature. When the device is idle, sleep mode will be activated so as to minimize the energy
consumption by using the minimal energy to maintain the running system. However, frequent switching from
sleep mode to normal mode can result in excessive energy consumption. If a mobile station switches back and
forth regardless of the amount of data to be received, unnecessary energy could be wasted. This effect is more
severe when the mobile subscriber is watching a streaming video because video decoding and screen lighting
already consume a lot of energy. A scheme is proposed to reduce the energy consumption by minimizing the
switching frequency of the receiver while still maintaining QoS requirements for streaming multimedia [ ]. The
Average Energy Efficiency (AEE) metric which is defined as the ratio of energy consumption due to data transfer
to the total energy consumption by a receiver is considered.3.9 Applications of Video Streaming 1. Video
compression: - Raw video must be compressed before transmission to acquit performance. Video compression
schemes may be labeled into two categories: scalable and non-scalable video coding. Scalable video is
capable of gracefully handling the bandwidth fluctuations inside the internet2. Utility-layer QoS control: - To
provide scope with various network conditions and different presentation quality requested by the users, various
application -layer QoS manipulate techniques have been proposed. The application-layer strategies include
congestion control and error control. Their respective capabilities are as follows, Congestion control is
employed to save you packet loss and reduce postpone. Blunders manage, alternatively, is to improve video
presentation first-class inside the presence of packet loss. Mistakes control mechanisms include ahead
mistakes correction (FEC), retransmission, and errors-resilient encoding and errors concealment.3. Non-stop
media distribution services: - To be able to offer fine multimedia presentations, adequate internet support is
crucial. That is due to the fact network guide can lesser delivery postpone and packet loss ratio. Constructed on
the top of the internet (IP protocol), continuous media distribution services are capable of achieve QoS and
efficiency for streaming video/audio over the exceptional attempt of internet. Non-stop media distribution
offerings consist of network changing, software-level multicast, and content replication. 4. Streaming servers: -
Streaming servers play a key function in providing streaming services. To off quality streaming offerings,
streaming servers are required to process multimedia information base timing constraints and aid interactive
control operations together with pause/resume, fast forward and rapid backward. Moreover, streaming servers
control operations together with pause/resume, fast forward and rapid backward. Moreover, streaming servers
need to retrieve media additives in a synchronous style. A streaming server generally consists of thee
subsystems, specifically, a communicator (e.g., shipping protocols), a operating system, and a storage
system.Figure 3.5: Streamline Media 5 Media synchronization mechanisms: - media synchronization is a main
characteristic that distinguishes multimedia programs from other traditional statistics applications. With media
synchronization mechanisms, the application at the receiver side can present numerous media streams within
the identical manner way as they originally captured. An example of media synchronization is that the actions of
a speaker's mouth match the played out audio. CHAPTER-4 STREAMING ARCHITECTURE CHAPTER-4
STREAMING ARCHITECTURE 4.1 Streaming ArchitectureStreaming is the method of transmitting media as a
continuous stream of data that can be processed by the receiving computer before the entire file has been
completely sent. Streaming video is a content sent in a compressed form over the Internet and displayed by the
viewer in real time. With streaming video or streaming media, a Web user does not have to wait to download a
file to play it. Instead, the media is sent in a continuous stream of data and is played as it arrives. The user
needs a player, which is a special program that decompresses and sends video data to the display and audio
data to speakers. A player can be either an integral part of a browser or specialized software. Streaming video is
usually sent from prerecorded video files, but can be also distributed as part of a live broadcast. In a live
broadcast, the video signal is converted into a compressed digital signal and transmitted from a special Web
server that is able to do multicast, sending the same file to multiple users at the same time. Figure 4.1: A typical
streaming system infrastructure 4.2 There are several methods of streaming available True streaming: the video
signal arrives in real time and is displayed to the viewer immediately. Downloading: the entire file is saved on
the computer, usually in a temporary folder, which then can be opened and viewed. For example, the files to be
viewed are small but this is not a convenient way for viewing large files, as the user needs to wait for the whole
file to be downloaded before it can be viewed. Also, very large files are not playable in real time. Progressive
downloading: the video clip is broken up into small files, each of which is downloaded to the user's device
during playback, which begins playing as soon as a portion of the file has been received. This simulates true
streaming, but does not have all the advantages. Progressive downloading is used mostly for delivering Flash
video over the web as is the case for YouTube. 4.3 Streaming Architecture A streaming media file needs to go
through several steps so that all the information can be delivered. The first step in streaming is to shoot the raw
audio and video and then capture them to the computer file format. The next step is to encode the captured
video in a specific format such as Windows Media Streaming, QuickTime, Real Networks Real Video, etc.
Encoding is a crucial part in streaming preparation; it is here where the appropriate bit rate can be set keeping in
mind whether the audience has the necessary hardware and software, and more importantly, the connection
speed to support the streaming. To deliver the encoded file to the network, it needs to be uploaded to a
streaming server. Unlike a web server, the streaming server controls the stream delivery in real time, handles the
load in an efficient way and increases the performance. A wide range of multimedia streaming servers are
available in the market. Many protocols, as described later in this study, can be used for delivering multimedia
content. The transport protocols packetize the compressed bit streams and send the video/audio packets over a
LAN or to the Internet. The packets that are successfully delivered to the receiver first pass through the transport
layers and 5 are then processed by the application layer before being decoded in the video/audio decoder. 4.3.1
Content Preparation Due to high bitrates and large space consumption, raw video content is not suitable for
streaming applications. The content needs to be processed before finally published. Figure 4.2 : Steps of
streaming content preparationContent can come from a variety of different sources: video cameras, prerecorded
tapes and DVDs, downloaded video clips, and others. In many editing systems, the content is all converted into
a common format before processing takes place. 4.3.2 Streaming Server The streaming server is responsible for
distributing media streams to viewers. It takes media content that has been stored internally and creates a
stream for each viewer request. These streams can be either unicast or multicast and can be controlled by a
variety of mechanisms. 4 .4 IP Streaming Network When video is being transported over an IP network, users
need to consider a number of factors such as multiplexing, traffic shaping, buffering and firewalls as Capture
Edit Process Compress Label and index Publish 6 these can significantly affect the end user's viewing
experience. Thus, these factors should be taken into account when planning a network. 4.4.1 Media Player
Player software that resides on the viewer's PC is responsible for accepting the incoming stream and converting
it into a displayed image. Apart from just playing the media, the most intensive job of the player software is to
decompress the incoming signal and create an image for display. All the player does is buffer the data packets,
making sure they are in the correct order and then unpack the data packets, decompressing the digital payload
[5]. The amount of processing required varies depending on the size of the image and on the compression
method. The player makes sure the data continues to stream from the source to the target client, if continuity is
interrupted the player takes corrective action such as pausing, repeating frames, or re -buffering. Players may
also request data to be resent. 4 .5 Streaming Protocols There are many protocols that have been developed to
facilitate real-time streaming of multimedia content. Protocols perform and important role in communication
without which number node will be able to communicate with each other.4.5.1 Routing protocolA routing
protocol specifies how routers communicate with each other, distributing Information that enables them to select
routes between any two https://en.wikipedia.org/wiki/Node_(networking) nodes on a
https://en.wikipedia.org/wiki/Computer_network computer network. A routing protocol shares this information first
among immediate neighbors, and then throughout the network. Real time routing protocols are used in IP for
real time traffic. Real time routing discovers an optimum route from source to destination which meets the real
real time traffic. Real time routing discovers an optimum route from source to destination which meets the real
time constraints. Timely and reliable data delivery is very important for positive results as out-dated data may
lead to disaster effects.Designing a network protocol to support streaming media raises many issues. Datagram
protocols, consisting of the User Datagram Protocol (UDP), send the small packets of media circulate as a
series. This is simple and efficient; but, there's no mechanism in the protocol to guarantee delivery. It's miles as
much as the receiving software to locate loss or corruption and recover facts the usage of errors correction
techniques. If records are lost, the movement may additionally suffer a dropout. The real-time transport Protocol
(RTP) and the real-time control Protocol (RTCP) have been in particular designed to move media over
networks.Another technique that seems to incorporate both the advantages of the use of a preferred internet
protocol and the potential for use for streaming [30] even stay content is adaptive bit rate streaming. HTTP
adaptive bit rate streaming is based on HTTP innovative download, but opposite to the previous technique, right
here the documents are very small, so they can be compared to the streaming of packets. Reliable protocols,
which include the Transmission control Protocol (TCP), assure correct transport of each bit inside the media
circulation. But, they accomplish this with a machine of timeouts and retries, which makes them extra
complicated to implement. It also approach that after there's information loss on the community, the media
stream stalls even as the protocol handlers detect the loss and retransmit the missing statistics. Clients can
minimize this impact by way of buffering statistics for display. While delay due to buffering is appropriate in video
on demand scenarios, customers of interactive packages together with video conferencing will experience a
loss of constancy if the postpone caused by buffering exceeds 200 micro seconds.Unicast protocols transfer a
separate copy of the media flow from the server to each recipient. Unicast is the norm for maximum net
connections, however does no longer scale properly while many users need to view the equal television
program concurrently. Multicast protocols had been evolved to lessen the server/network loads attributable to
copy records streams that occur when many recipients acquit unicast content streams independently. Those
protocols transfer a single movement from the source to a collection of recipients. 4.5.1.1 Real-Time Protocol
(RTP) The Real-Time Protocol (RTP) is a transport protocol that provides end-to-end network transport functions
for applications transmitting data with real-time properties, such as interactive audio and video. Services that
use RTP include payload type identification, sequence numbering, time stamping and delivery monitoring. The
most important thing RTP does is time stamping that allows placing the incoming audio and video packets in the
correct timing order. Applications run RTP on top of the User Datagram Protocol (UDP). RTP includes RTCP, a
closely linked protocol, to provide a mechanism for reporting feedback on the transmitted real-time data. RTP
can be used in the following scenarios: multicast audio conferencing as well as audio and video conferencing.
The protocol has been demonstrated to scale from point-to-point use to multicast sessions with thousands of
users, and from low-bandwidth cellular telephony applications to the delivery of uncompressed High-Definition
Television (HDTV) signals at gigabit rates [6]. RTP is one of the technical foundations of Voice over IP and on
this framework is frequently used in conjunction with a signaling protocol such as the session Initiation Protocol
(SIP) which establishes connections throughout the community [32].RTP is designed for end-to-end, real-time,
switch of streaming media. The protocol affords facilities for jitter reparation and finding out of series arrival in
records that are not unusual during transmissions on an IP community. RTP allows information switch to a pair
of locations through IP multicast. RTP is appeared as the primary standard for audio/video delivery in IP
networks and is used with a connected information and payload layout. Actual-time multimedia streaming
applications requite timely transport of facts and regularly can tolerate a few packet losses to collect this
purpose. For instance, loss of a packet in audio software may bring about lack of a partition of a second of audio
facts, which may be made invisible with suitable mistakes suppression algorithms. The Transmission control
Protocol (TCP), although standardized for RTP use, is not generally used in RTP packages because TCP
favors reliability over timeliness. Figure 4.3: RTP packet header 4.5 .1.2 Real-time control protocol (RTCP)The
Real-Time Control Protocol (RTCP) is a data transport protocol used in conjunction with RTP for transporting
real-time media streams. It includes functions to support synchronization between different media types (e.g.,
audio and video) and to provide information to streaming applications about network quality, number of viewers,
identity of viewers, etc. RTCP gives feedback to each participant in an RTP session. The primary function of
RTCP is to provide comments on the Quality of Service (QoS) in media distribution by periodically sending
information data to participants in a streaming multimedia session this feedback can be used to control
performance. The messages include reception reports, including number of packets lost and jitter statistics. A
service may use these records to control quality of provider parameters, possibly by way of excluding go with the
flow, or using a one of a kind codec. Such information may be utilized by the source for adaptive media
encoding (codec) and detection of transmission faults. If the session is carried over a multicast community, this
permits non-intrusive session satisfactory tracking. This information can potentially be used by higher layer
applications to modify the transmission. Some RTCP messages relate to control of a video conference with
multiple participants.1 . Sender document (SR)The sender record is transmitted periodically by way of the active
senders in a calculate to record transmission and reception information for all RTP packets sent in the course of
the c- program language. The sender report consists of an absolute timestamp. Absolutely the timestamp allows
the receiver to synchronize RTP messages. It is far in particular necessary when both audio and video are
transmitted simultaneously, because audio and video streams use independent relative timestamps. MN
KJ;;;RTCP Figure 4.4: RTCP Message type [34] Receiver Record (RR) The receiver file is for proactive
contributors, the ones that do not send RTP packets. The record informs the sender and other receivers about
the fine of service. Source description (SDES) The source Description message is used to send the CNAME
item to discussion participants. It can also be used to offer additional information along with the name, e mail
item to discussion participants. It can also be used to offer additional information along with the name, e mail
deal with, cell phone wide variety, and address of the owner or controller of the supply. Good-Bye (BYE)
Although the fact that other resources can detect the absence of a source, this message is a detect statement. It
is also beneficial to a media mixer. Application-precise message (APP) The utility-precise message affords a
mechanism to design application-precise extensions to the RTCP protocol. PUMA PUMA helps any source to
send multicast packets addressed to a given multicast organization. PUMA does not want some other unicast
routing protocol due to the fact it is able to act as unicast protocol. Channel Manager: the role of the channel
manager is to assign available channels to wireless links to satisfy a performance goal Layer Control Plane,
Data Plane, Constraint Solver, Channel Manager, Declarative Networking Engine Routing Protocols, Channel
Selection Protocol, Constraints and Goals, Forwarding Agent Network, Network Layer Status. Figure 4.5: PUMA
The channel manager takes as additional input network status information, which includes network topology
and the set of channels available to each node. Declarative Networking Engine: At the network layer, the Rapid
Net declarative networking engine is deployed within the control plane to implement a variety of neighbor
discovery and routing protocols. Each PUMA node runs a number of multi-channel wireless radio devices
(interfaces). Typically, the first interface operates on the common control channel (CCC), reserved solely for
routing and channel selection protocol messages. 4.6 Streaming Media DistributionThere are three common
techniques for streaming real-time audio and video over a network: unicasting, multicasting and broadcasting. 4
.6.1 Unicast A unicast stream is a one-to-one connection between the server and a client, which means that
each client receives a distinct stream and only those clients that request the stream receive it. In other words, in
unicasting each video stream is sent to exactly one recipient. If multiple recipients want the same video, the
source must create a separate unicast stream for each recipient. These streams then flow all the way from the
source to each destination over the IP network. 4.3.2 Multicast In multicasting, a single video stream is delivered
simultaneously to multiple users. Through the use of special protocols, the network is directed to make copies of
the video stream for every recipient. This process of copying occurs inside the network rather than at the video
source. Copies are made at each point in the network only where they are needed. The full range of IPv4
multicast addresses is from 224.0.0.0 to 239.255.255.255. 4.6 .3 Broadcast In broadcasting a single packet is
sent to every device on the local network. Each device that receives a broadcast packet must process the packet
in case there is a message for the device. Broadcast packets should not be used for streaming media, since
even a small stream could flood every device on the local network with packets that are not of interest to the
device. Broadcast packets are usually not propagated by routers from one local network to another, making
them undesirable for streaming applications. In true IP multicasting, the packets are sent only to the devices that
specifically request to receive them, by joining the multicast. 4 .7 Video Codecs and Video Types 4. 7.1 Codecs
Overview Streaming video and audio signals over an IP network need to be compressed in most cases, which
means reducing the amount of bits that need to be transported while keeping the quality as good as possible.
Compression is the process in which the amount of data used to send the video and audio is reduced to meet
the bit rate requirements. In general, a codec is a software that encodes and decodes (compress and
decompress) the video streams. High level encoder architecture [12] The Moving Pictures Experts Group has
developed some of the most common compression systems for video around the world and given these
standards the common names MPEG-1, MPEG-2, and MPEG-4. 3.2 Video Compression using H.264 Codec
H.264/MPEG4-AVC is the video coding standard of ITU-T Video Coding Experts Group (VCEG) and the
ISO/IEC Moving Picture Experts Group (MPEG). H.264 has been adopted by the Motion Picture Experts Group
(MPEG) to be a key video compression scheme in the MPEG-4 format for digital media exchange. It is also
known as MPEG-4 Part 10 and MPEG-4 AVC (Advanced Video Coding). H.264 delivers the same quality as
MPEG-2 at a third to half the data rate, and when compared to MPEG-4 Part 2, H.264 provides up to four times
the frame size at a given data rate [13].4.7.2 MPEGMPEG stands for " Moving Picture Experts Group." The
MPEG organization, which works among the International Organization for Standardization (ISO), and IEC the
(International Electro technical Commission), develops standards designed for digital audio and video
compression. The set frequently works to expand more efficient methods to digitally compress and maintain
audio and video documents [46]. This is why various movies at the internet, which includes movie trailers and
song motion pictures, exist within the MPEG format.MPEG video compression is used into various current and
rising products. It is the heart of digi tal television set-top boxes, DVD players, HDTV decoders, Internet, video,
video conferencing and further applications. These applications help from video compression in the fact that
they can need less storage space for archived video information, less bandwidth for the transmission of the
video information from one end to another, or a combination of together. Moreover the detail that it works well in
a broad range of applications, [47] a big part of its recognition is that it is defined in two finalized international
standards, with a third standard presently in the description process [48]. MPEG-1 Coding of moving pictures
and linked audio for digital storage space media at capable of on 1.5 Mbit/s . MPEG-1 compression usually used
for https://en.wikipedia.org/wiki/Audio_compression_(data) audio also
https://en.wikipedia.org/wiki/Video_compression video. It was mainly designed to permit moving pictures and
sound toward exist encoded. It was used in cable TV services before MPEG-2 became extensive. To gather the
low bit necessity, MPEG-1 images the https://en.wikipedia.org/wiki/Downsample down samples , over and
above uses picture rates of only 24-30 Hz, answering in a sensible quality. It contains the popular MPEG-1
Audio motion picture (https://en.wikipedia.org/wiki/MP3 MP3) audio compression
system.https://en.wikipedia.org/wiki/MPEG-2 MPEG-2 MPEG-2 is refereed important because it has been
chosen as the compression method for over-the-air https://en.wikipedia.org/wiki/Digital_television digital
television https://en.wikipedia.org/wiki/ATSC_Standards ATSC,
television https://en.wikipedia.org/wiki/ATSC_Standards ATSC,
https://en.wikipedia.org/wiki/Digital_Video_Broadcasting ISDB and https://en.wikipedia.org/wiki/ISDB DVB,
digital TV services like https://en.wikipedia.org/wiki/Dish_Network Dish Network, digital
https://en.wikipedia.org/wiki/Cable_television cable television signals and
https://en.wikipedia.org/wiki/DVD_Video DVD Video. It is moreover utilized on https://en.wikipedia.org/wiki/Blu-
ray_Disc Blu-ray Discs, but these usually use MPEG-4 Part 10 or high-definition
substance.https://en.wikipedia.org/wiki/MPEG-3 MPEG-3MPEG-3 connection with multi-resolution compression
and scalable standization and was recommend for HDTV compression but was establish to be unneeded and
was combined with MPEG-2; as answer there is no MPEG-3 standard. This is MPEG-1 or MPEG-2 Audio Layer
III.https://en.wikipedia.org/wiki/MPEG-4 MPEG-4 A MPEG-4 is use coding of audio-visual objects . MPEG-4
utilizes extra coding tools with additional complexity to get higher compression aspects than MPEG-2. In adding
to other efficient coding of video, MPEG-4 moves about closer to computer graphics functions. In other difficult
profiles, the MPEG-4 decoder effectively becomes a description processor and the compressed bit stream
explains three-dimensional shapes and plane surface. Normalized
https://en.wikipedia.org/wiki/Digital_Rights_Management Digital Rights Management signaling, then recognized
in the MPEG district as Intellectual Property Management and Protection
(IPMP)https://en.wikipedia.org/wiki/MPEG-4_Part_2 MPEG-4 Part 2 is Simple and Advanced Simple Profile and
MPEG-4 Part 10 or H.264. MPEG-4 AVC can be utilized on https://en.wikipedia.org/wiki/HD_DVD HD DVD [50]
and https://en.wikipedia.org/wiki/Blu-ray_Disc Blu-ray Discs, beside with https://en.wikipedia.org/wiki/VC-1 VC-
1 also MPEG-2.4.7.3 High Definition (HD)High definition video offers much more detail in video images
because it uses many more pixels than standard video. This allows much larger video displays to be used
without the loss of sharpness and high quality. HD signals use a different aspect ratio for the video image. There
are two common forms of high definition video known as 1080i and 720p, which have several variations based
on the frame rate of the video stream. Some options for each include 25 fps (frames per second), 60 fps, or 24
fps is also available and is the same frame rate used for film production. However, and due to the low frame rate,
24 fps does not work well with images involving fast motion . Today's high definition televisions have vertical
display resolutions of either 720 or 1,080 lines! Such high resolution is what gives HD videos its sharpness and
eye-popping realism. To watch HD video, you need both a HD source and a HD monitor. HD sources transmit
video shot with HD cameras and include television broadcasts (antennas, cable or satellite), Blu-ray Discs,
video game consoles and computer/Internet video sources. HD monitors usually come in the form of a wide
variety of HD televisions. However, most computer monitors are also capable of displaying HD video. So, if you
don't have an HDTV, you can usually still enjoy HD video on your computer.There are more and more 'Full HD'
screens (capable of displaying 1080p) appearing. A 1080p screen can de-interlace a 1080i signal. With very
few 1080p sources available, the main benefit of a Full HD screen is its ability to map a source such as Sky TV
(1080i) pixel for pixel to the screens resolution (i.e. 1920 x 1080). HDTV uses 16:9 widescreen as is its aspect
ratio so widescreen pictures are transmitted properly and not letterboxed or panned. Digital multichannel sound
can be broadcast as part of an HDTV signal, so if you have a surround sound speaker set-up you can use it to
listen to TV rather than just DVDs. To receive an HDTV broadcast you need either a TV with a built-in HDTV
tuner or a HDTV receiver which can pick-up off the air HDTV channels, or cable or satellite HDTV like. You also
need to live in are where HDTV channels are broadcast or distributed by cable or satellite. HDTV broadcast
systems are identified with three major parameters: Frame size in pixels is defined as number of horizontal
pixels number of vertical pixels, for example 1280 720 or 1920 1080. Often the number of horizontal pixels
is implied from context and is omitted, as in the case of 720p and 1080p.Scanning system is identified with the
letter p for https://en.wikipedia.org/wiki/Progressive_scan progressive scanning or i for
https://en.wikipedia.org/wiki/Interlaced_video interlaced scanning.Frame rate is identified as number of video
frames per second. For interlaced systems, the number of frames per second should be specified, but it is not
uncommon to see the field rate incorrectly used insteadCHAPTER-5 SIMULATION ENVIRONMENT
CHAPTER 5 SIMULATION ENVIRONMENT Simulation EnvironmentIt is important to setup simulation
environment to analyses protocols behavior. Quantitative analysis is conducted with the help of NS-2 tool.5.2
Network SimulatorsNetwork simulator (Version 2), wide referred to as NS-2, is just a discrete event driven
network simulation tool for studying the dynamic nature of communication networks. It's an open sou rce solution
built in C++ and Otcl programming languages. Ns-2 provides extremely modular Platform for wired and wireless
simulations supporting totally different network component, protocol (e.g., routing algorithms, TCP, UDP, and
FTP), traffic, and routing types. In general, NS-2 provides users with the simplest way of specifying network
protocols and simulating their corresponding behaviors and result of the simulation is provided within a trace file
that contains all occurred events.To test new concepts researchers resort to one of two underlined techniques
i.e. either testing new concepts in real time environment or testing them in simulated environment. Creation of
real time environment might not be forever possible. In such cases it is need to depend on the simulation tools
[51]. Just in case of mobile ad-hoc networks it's been observed that around hour of work is completed using
simulation tools. Ns-2 is most generally used tool among various available simulation tools.An overview of how
a simulation is performed in NS-2 from the user input, within the OTCL script, to processing; the user creates
node movement and traffic generation files. A TCL script is used to bridge the OTCL script created by the user
with the C++ code resident within the NS-2 simulator to perform the simulation. The NS-2 simulator performs the
simulation and creates an output file configuration results of the simulation. The user will add the network
simulator (NS) to the TCL script to view the movement of node within the simulation. The output trace file is
parsed in the Perl script and therefore the results of the data process are analyzed in wireless network.Figure
parsed in the Perl script and therefore the results of the data process are analyzed in wireless network.Figure
5.1 Network Simulation [51]NS-2 is completely simulates a covered network from the physical radio
transmission channel to high-level applications. The NS-2 simulator was initially developed by the University of
California at Berkeley and VINT scheme the simulator was newly extended to provide simulation support for ad
hoc network by Carnegie Mellon University (CMU Monarch Project homepage, 1999). The NS-2 simulator has
so various features that create it suitable for our simulations. Network environment for ad-hoc networks, 2.
Wireless channel modules (e.g.802.11), 3. Routing along multiple paths, 4. Mobile hosts used for wireless
cellular networks. NS-2 is an object-oriented simulator; the simulator supports a class hierarchy in C++ and a
equal class hierarchy inside the Otcl interpreter. There is a one-to-one association between a class in the
interpreted hierarchy and one in the compile hierarchy. The motive to use two different programming languages
is that Otcl is fitting for the programs and configurations that demand everyday and fast change while C++ is
fitting for the programs that have high demand in speed. NS-2 is highly extensible. It not only supports mainly
used IP protocols but also allows the users to enlarge or implement their own protocols. It also provides
commanding trace functionalities, since a variety of information need to be logged for investigation. The full
source code of NS-2 can be downloaded [52] and compiled for several platforms such as UNIX and Windows.
Simulator other then NS-2 is as follows:5.3 NS-2 COMPONENTSNS- simulator NAM (Network Animator): visual
demonstration of NS output Pre-processing: handwritten TCL Post analysis: Trace analysis exploitation X-graph
NAM : NAM is AN animations tool for viewing network simulation trace and real world packet trace information.
NAM was design to scan simple animation event commands from a large trace file. Events commands are kept
in a file and read from the file whenever necessary. If we have a tendency to use NAM then initial we've to
supply the trace file. The trace file contains topology data e.g. nodes Links as well as packet trace. When NAM
is executed, it'll read the trace file, created topology, pop up a window do layout if necessary then pause at time
zero. Through its user interface it provides management over several aspect of animation.TCL : Tool command
language (TCL) may be a terribly very, open-source-licensed programming language from sun Microsystems. It
provides basic language features like variables, procedures, control etc, and runs on any modern OS. TCL may
be an originally intended to be a command line language. It's easy to construct quickly without regard to proper
design.TK : TK is an open source cross-platform widget toolkit, that's a library of basic parts for building a
graphical user interface (GUI).OTCL : Object orientated extension of TCL (OTCL) created by David. It's used in
network simulator (NS-2) and it usually run under UNIX environment. Here the keyword inst proc is used to
inherit the property of 1 class to a different. It helps us to reusability of C++/java. GloMoSim: GloMoSim is a
scalable simulation environment used for wired and wireless network systems. Presently it only supports
protocols for a simply wireless network. It is also built in a coated approach; such as OSI layer network
architecture. GloMoSim is calculated as a set of library modules, each one of which simulates a definite wireless
communication protocol in the protocol stack. The library has been developed using PARSEC, a C-based
parallel simulation language. New protocols and modules can be programmed and extra to the library using this
language. The newest edition of GloMoSim has implemented DSR. GloMoSim source and binary code can be
downloaded simply by academic institutions for research purposes. profitable users must use QualNet, the
commercial version of GloMoSim. OPNET Modeler: OPNET Modeler is commercial network simulation
environment on behalf of network modeling and simulation. It permits the users to design and learn
communication networks, devices, protocols, and applications with flexibility and scalability. It simulates the
network graphically and its graphical editors mirror the formation of real networks and network components. The
users can propose the network model visually. The modeler uses object-oriented modeling approach. The
nodes and protocols are modeled as program with inheritance and specialization. The development language is
C. Why NS-2:Free Open source Programming Language NS-2 Yes Yes C++, T CLGloMoSim Limited Yes
Parsec Opnet Modeler No No c Table 5.1: Comparison Table5.6 SIMULATION PARAMETERT his simulation
compares the performance of WiMax network of Data, MP4 Video and HD Video using parameters like Packet
Delivery Ratio, End to End delay, Residual Energy and Throughput with different number of nodes like 20,
60,100,150,200,250,300.Simulation tool versionNetwork simulator-2.35 IEEE scenario Wireless(802.11)
Mobility model Two ray ground Number of nodes 20,60,100,150,200,250,300 Node movement speed
10m/sec,28m/sec. Traffic type UDP Antenna Omni direction antenna MAC Layer IEEE 802.11 Routing Protocol
RTP,RTCP,PUMA Queue limit 50 packet Simulation area(in meter) 2000*2000 Queue type Drop-tail 5.7
Performance MetricsA performance metric s are the parameters which explain the behavior of any network. It
explains how a network behaves in certain environment. It is use to evaluate project activities and performance.
Before on telling these metrics, it is reminded that the research focuses only on data transmission, and the
metrics calculate their features with respect to data packets. The simulation parameters are as follows:Packet
Delivery Ratio (PDR)End to End Delay Residual Energy Throughput 5.7.1 Packet Delivery Ratio This is the
fraction of the data packets received by the destination to those sent by the source. This classifies the ability of
the protocol to discover routes. The greater value of packet delivery ratio means the better performance of the
protocol. Packet delivery ratio= Number of packet received / Number of packet send5.7 .2
ThroughputThroughput is defined as the number of packet flowing through the channel at a particular instant of
time. This performance metric signifies that the average rate at which the data packet is delivered successfully
from source node to destination node over a communication network is known as throughput. Throughput=
N/1000 Where N is the number of bits received successfully by all destinations. 5.7.3 End to End DelaysThis is
the average delay between the sending of the data packet by the source and its receipt at the corresponding
receiver. This includes all the delays caused during route acquisition, buffering and processing at intermediate
nodes. End to End Delay = Sum of the time spent to deliver packets for each destination Number of packets
nodes. End to End Delay = Sum of the time spent to deliver packets for each destination Number of packets
received by the all destination nodes 5 .7.4 Residual EnergyIt is the tot al amount of remaining energy of the
nodes after the completion of Communication or simulation. If a node is having 100% energy initially and having
70% energy after the simulation than the energy consumption by that node is 30%.The unit is in Joules.
CHAPTER-6 IMPLEMENTATION & RESULTS CHAPTER-6 IMPLEMENTATION & RESULTSWe have
implemented our work that is Creation of WiMax Scenario for NS-2 and then to create different routing protocols
with the use of different performance matrices like Packet Delivery Ratio, End to End delay, Residual Energy
and Throughput. In our case first we have created scenario file for IEEE 802.16 standard which is TCL script
consist of various routing protocols in our case these are RTP,RTCP and PUMA than a particular WiMax
scenario or topology with low to high node densities.There are two types of videosMP4 and HD videosThe
different sizes used in my thesis represents here in this section as explained below MP4 Video 34 MBMP4
Video 53 MBMP4 Video 104 MBMP4 Video 400MB MP4 Video 154MB HD Video 343MB HD Video 172MB 6 .2
Result Analysis6.2.1 Simulation of Packet Delivery Ratio Text data Figure 6.2.1: Results of PDRNo. of Node
PUMA RTCP RTP 20 0 0 0 60 0.7316 0.9136 0.9807 100 0.6292 0.8267 0.9715 150 0.5108 0.7417 0.5774 200
0.3144 0.4371 0.4209 250 0.2741 0.602 0.3978 300 0.3431 0.656 0.5018 Table 6. 2.1: PDRFigure.6 .2.1 shows
the graphical result of packet delivery ratio with different number of nodes by using three protocols which are
PUMA, RTCP and RTP. I also observed that when the implementation of RTP protocol was done by giving 20
numbers of nodes, the result of PDR is negligible (near zero).Then after when I increase the number of nodes up
to 60 I observed a constant relationship between PDR and number of nodes at 60, t the particular position the
maximum PDR value was also achieved. Between the ranges of 60 to 100 the value of RTP is constant and I
noted that the value of PUMA was continuously decreasing as with respect to increasing number of nodes.
Similarly at the range of 60 to 100 the value of PDR continuously decreases as by increasing number of nodes.
The minimum value of PDR calculated at the nodes of 250 as (RTCP=0.6, RTP=0.39 and PUMA= 0.27) this all
respective values are in decreasing order and the node of 300 shows the best result of RTCP than the
remaining other two protocols.6 .2.2 Simulation of Throughput Text dataFigure 6.2 .2: Results of ThroughputNo.
of Node PUMA RTCP RTP 20 0 0 0 60 674.56 683.03 604.36 100 673.67 677.59 702.41 150 831.67 678.25
687.11 200 440.67 233.36 683.33 250 421.79 548.22 433.7 300 901.09 923.22 908.08 Table 6.2 .2:
ThroughputThe results of Throughput as per different number of nodes it mentioned in the graphical diagram
shown above in Figure 6.2 .2 by using three protocols which are PUMA, RTCP and RTP. The results again
shows that at the value of nodes 20 all are (PUMA, RTCP and RTP) negligible (zero) and between the range of
20 to 60 the results sudden increases. As I see at the point of node 60 value are (PUMA=674, RTCP=683 and
RTP= 604).After increasing number of nodes up to 60 to 150 I get constant value of RTCP and the values after
150 node decreases till the value at the point of 200 nodes. Then I observed that the values sudden increases
and increase in continuous manner at the value of 250 and 300 node. The green line show RTP at the range of
60 to 100 nodes It also increasing simultaneously at the range of 100 to 200 nodes I got a constant relationship
between Throughput and nodes. The blue lines in the graph shows the results of PUMA, as I see at the range of
60 to 150 results are continuously increases and then at the point of 200 nodes it decreases. I also observed
that after the point of 200 nodes the values It continuously increasing till nodes 250 and 300.6.2 .3 Simulation of
End To End Delay Text Data Figure 6.2.3: Results of End To End Delayno. of nodes PUMA RTCP RTP 20 60
100 150 200 250 300 0 160.282 134.627 67.262393 282.214 244.859 203.074 0 140.752 126.416 164.999
245.058 141.4 210.603 0 407.514 128.574 222.823 265.084 238.459 210.971 Table 6.2.3: Table End To End
DelayThe above graph shows that the particular values of END TO END DELAY with respect to different
number of nodes. For example like on 20 values of all PUMA, RTCP and RTP are negligible and in between the
range of 20 to 60 these values are continuously increasing. At the point of node 60, the values show are
(RTP=407), (RTCP=140) and (PUMA=160). It was also noted that the RTP is very high at node 60 as compare
to RTCP and PUMA. We also observed some other similar values which are also shown in the graph. At the
point of node 100 values are (RTP=128), (RTCP=126) and (PUMA=134) and at point of node 150 values are
(RTP=222), (RTCP=164) and (PUMA=67). As we see the final node of number 300 we got the result
(RTP=2101), (RTCP=210) and (PUMA=203). Now it may be observed that a minimum fluctuation presents in
RTCP graph. With the help of this we easily concluded that RTCP is the better than as compare to the
remaining two (PUMA and RTP). 6.2 .4 Simulation of Energy Text dataFigure 6.2.4: Results of ENERGY No. Of
N odes PUMA RTCP RTP 20 0 99.995464 0 60 24.215623 23.948835 23.64057 100 24.245338 23.962544
23.92842 150 159.323 81.025981 65.874751 200 73.80338 98.937597 64.329921 250 64.177742 65.69534
64.520611 300 62.551588 63.612068 63.252814 Table 6.2.4 Table Energy The section here shows the result of
Energy graph at particular value on the different number of node like 20 in which two protocols which represent
the same results (PUMA and RTP) are negligible and the other one (RTCP) provide the high Energy at point of
20 node. The green line of RTP is continuously increases for the value of Energy. The red line of RTCP is in
range 20 to 60 node shows the sudden decrement and the all remaining results are constant at 60 to 100 nodes.
After this the continuous increment is shown in the values between 100 to 200 nodes. I also observed that the
value of 250 nodes is decreases and in between 250 to 300 remains constant. I also noted that in case of PUMA
at the range of 20 to 100 node fluctuation of results are same but at 150 nodes I got the value is very high and
after that a sudden decrement was observed at 200 node then I pointed the value of 250 to 300 nodes which
shows the constant result. I also concluded here that the best protocol is RTP bec ause it shows the results in
continuously increment manner.6.3Result of MP4 Video 34MB 6.3.1 Simulation of Packet Delivery Ratio MP4
Video 34MBFigure 6.3.1: PDR MP4 VideoNo. of node PUMA RTCP RTP 20 0.0775 0.0508 0.2578 60 0.7991
0.5299 0.3526 100 0.7973 0.4903 0.267 150 0.0491 0.049 0.267 200 0.0243 0.0497 0.0239 250 0.0231 0.0432
0.0224 300 0.0149 0.0263 0.0213 Table 6 .3.1: PDR MP4 Video 34MBPacket delivery ratio can be stated as the
ratio of the number of packets sent by source using these three protocols. We achieve the same value at the
point of 20 nodes PUMA and RTCP. RTP achieve the high value compare the other protocols at the PDR radio
between the 20 to 60 nodes these protocols rise the value. Between 60 to 100 nodes we achieve the constant
value. In the initially state of 150 nodes between the ranges of 300 nodes PDR ratio are decreases. 6.3.2:
Simulation of Throughput MP4 Video 34MBFigure 6 .3.2: Throughput MP4 Video 34MBNo. of node PUMA
RTCP RTP 20 74.13 46.96 772.68 60 707.72 485.64 775.34 100 707.7 485.5 773.59 150 77.37 38.08 77.94
200 71.04 39.21 81.18 250 80.1 42.23 80.7 300 78.39 28.31 79.04 Table 6 .3.2: Throughput MP4 Video
34MBFrom the figure it has been clear in this throughput graph we are using three protocols RTP, RTCP and
PUMA . Compare between all protocols with using different number of nodes firstly we see the same value of
RTCP and PUMA at number of node 20 if we see the case of RTP so we have the maximum throughput value.
After the range of number of nodes between 20 to 100 we see the throughput is rises to all protocols then
passing number of node 100 we see the all protocols suddenly fall come to the number of node 150. We have
the constant value 150 to 300 nodes.6.3.3 Simulation of End To End Delay MP4 Video 34MBFigure 6.3.3: End
To End Delay MP4 Video 34MBNo. of node PUMA RTCP RTP 20 2282.43 1232.34 380.267 60 360.461
238.443 381.445 100 360.718 238.297 382.39 150 1719.41 2126.06 3541.18 200 4079.02 2173.78 4079.02
250 4619.42 2417.08 4284.62 300 4082.15 3017 4456.71 Table 6.3.3: End To End MP4 Video 34MBFigure
6.3.3 represented above gives the values of end to end delay for all different number of nodes. We get the value
of number of node 20 are (RTP = 380.267, RTCP= 1232.34 and PUMA = 2282.43) all protocol get different
delay. We get the number of node 60 at the all protocol we see minimum delay. It is delay remain constant
between the number of node 60 to 100 then increase again after passing the number of node 100 continuous
increase at the range of between the number of node 100 to 300.6.3.4: Simulation of Energy MP4 Video
34MBFigure 6.3.4: Energy MP4 Video 34MB No. of node PUMA RTCP RTP 20 23.686966 27.541931 4.27532
60 99.972618 7.686805 5.368983 100 99.962917 5.996295 4.990293 150 54.086836 38.933679 38.188198
200 63.054086 45.529228 51.503346 250 89.378502 76.843204 51.5246 300 65.275816 53.982704 52.0826
Table 6 .3.4: Energy MP4 Video 34MBNext we focus on graphic al figure present in Figure 6.3.4 which gives
results for all protocols of MP4 video 1 (34MB) for the value of energy between number of different nodes. We
achieve the minimum value of RTP at the number of node 20 and we achieve the maximum value RTCP and
PUMA at the number of node 20 then the case of RTP we are increase number of nodes then energy
consumption is always increase to the all number of nodes and PUMA is all increase but the range of between
number of node 20 to 100 then decrease at the node 150 but after 150 again increase the energy consumption.
RTCP get the different value compare to both protocols. RTCP use minimum energy at the number of node 100
after then the increase the energy to the all number of nodes.6.4 Result of MP4 Video 53MB 6.4.1 Simulation of
Packet Delivery Ratio MP4 Video 53MBFigure 6.4.1: PDR MP4 Video 53MBNo. of node PUMA RTCP RTP 20
0.0709 0.0179 0.0682 60 0.3645 0.1728 0.3526 100 0.3379 0.1682 0.267 150 0.028 0.0178 0.0275 200 0.0297
0.0187 0.0383 250 0.0241 0.0324 0.0785 300 0.0175 0.0292 0.0433 Table 6.4.1: PDR MP4 Video 53MBThe
result of packet delivery ratio between different number of nodes use RTP, RTCP and PUMA. We have got the
reliable PDR value RTCP at number of node 20 and RTP and PUMA have got the same value at the number of
node 20 this value is higher than the RTCP. After passing the node 20 we have got the increase PDR at the
number of node 100 then decrease the value at the number of node 150 and all protocol provided to same value
at the point of 150 nodes. After 150 nodes we have got the minimum increase the PDR value till the number of
node 300 and we achieve almost similar value of RTP, RTCP and PUMA. 6.4.2 Simulation of Throughput MP4
Video 53MBFigure 6.4.2: Throughput MP4 Video 53MNo. of node PUMA RTCP RTP 20 84.67 10.63 84.67 60
678.63 160.53 678.63 100 74.68 158.16 677.16 150 70.57 13.27 70.57 200 79.57 13.24 206.07 250 84.81 29.54
270.98 300 77.14 27.49 244.3 Table 6.4.2: Throughput MP4 Video 53MBAs we focus of Figure 6.4. 2 shows
below given the throughput and RTP, RTCP and PUMA with number of different nodes are use. As we see the
number of node 20 we achieved the zero throughput at RTCP and RTP achieved the value is near about zero
but the case of PUMA we achieved the large value of throughput. Number of node 60 we see the PUMA it is
continuously decrease till the point of 100 node then we achieved constant value at the number of node 300 if
we see the RTP and RTCP they both are increase at the point of 100 node then passing number of node 100
they both are decreases at the point of 150 node. RTCP achieved the increases throughput till the point of 150
to 300 number of node but got the minimum values at all point and RTP is also similar to RTCP but it is
achieved the large value compare to RTP.6.4.3 Simulation of End To End Delay MP4 Video 53MBFigure 6.4.3:
End To End Delay MP4 Video 53MBNo. of node PUMA RTCP RTP 20 2091.29 1197.08 2091.29 60 306.444
120.981 306.444 100 335.268 122.656 307.27 150 3553.21 1208.04 3553.21 200 3683.71 1255.93 1999.94
250 4136.05 1062.9 2148.57 300 4813.5 1113.85 2123.9 Table 6.4.3: End To End Delay MP4 Video 53MBThe
above graph shows the result of end to end delay of different protocols RTP, RTCP and PUMA with the help of
different nodes. The result shows at the number of node 20 we have got same delay case of RTP and PUMA but
the case of RTCP we have got minimum delay. Then all protocols are the suddenly decreases delay till number
of node 100 after passing number of node 20. RTP at the number of node 100 to 200 we have got the decreases
delay then after increases the delay at the point of 300 numbers of nodes and the case of RTCP and PUMA we
have got the increases delay continuously at the point of node 300. 6.4.4: Simulation of Energy MP4 Video
53MBFigure 6.4.4: Energy MP4 Video 53MBNo. of node PUMA RTCP RTP 20 17.404559 58.539981
17.404559 60 7.08699 6.370936 7.08699 100 49.27431 4.294595 6.996246 150 48.518487 26.436306
17.404559 60 7.08699 6.370936 7.08699 100 49.27431 4.294595 6.996246 150 48.518487 26.436306
48.518487 200 51.572734 38.683915 69.637226 250 89.413474 94.255235 98.131849 300 77.068965
90.782951 94.669218 Table 6.4.4: Energy MP4 Video 53MBA figure 6.4. 4 we see the result of energy with
different number of nodes first of all we see the number of node 20 where RTP =17.404559, RTCP = 58.539981
and PUMA = 17.404559. All protocols are meeting at the point of node 60 and use small energy consumption.
After number of node 60 to 100 we get have minimum energy used then number of node 100 between the range
of 250 node they all protocols increases the value but we see the number of node 300 we achieved minimum
energy.6.5 Result of MP4 Video 104MB6.5.1 Simulation of Packet Delivery Ratio MP4 VideoFigure 6.5.1: PDR
MP4 Video 104MBNo. of node PUMA RTCP RTP 20 0.0834 0.0354 0.0834 60 0.2814 0.2852 0.2814 100
0.2229 0.2846 0.2229 150 0.0308 0.0452 0.0308 200 0.0282 0.0316 0.0282 250 0.0227 0.0413 0.0227 300
0.0154 0 0.0227 Table 6.5.1: PDR MP4 Video 104MBThe previous portion of this section compares the results
of PDR and protocols are RTP, RTCP and PUMA for different number of nodes we used. Now in this portion of
the section we try to give a view of results with the help of packet delivery ratio. We get the value of PDR (RTP=
0.0834, RTCP= 0.0354 and PUMA= 0.0834) at the number of node 60 we get increases the value of RTP,RTCP
and PUMA then RTCP is constant till the number of 100 but we see the RTP and PUMA both are decreases. At
the number of node 150 we get decreases the PDR value till the number of node 300 6.5.2 Simulation of
Throughput MP4 Video 104MBFigure 6.5.2: Throughput MP4 Video 104MBNo. of node PUMA RTCP RTP 20
100.12 31.18 314.67 60 424.72 271.9 1334.82 100 424.22 270.82 1333.25 150 74.81 29.03 235.12 200 79.73
13.21 250.59 250 78.78 28.03 247.59 300 70.3 3.52 247.59 Table 6.5.2: Throughput MP4 Video 104MBS
imilarly as shown in Figure 6.5.2 present below represent the value of through put for different protocols with use
different number of nodes graphically. The previous results of this throughput are not constant continuously.
Shows in this fig at number of node 20 we reached the zero value of RTCP and PUMA if we see the RTP they
are reached great value compare to both at that time rapidly increase at the node 60 later than node 60 to node
100 they all protocols providing constant values at that time all are rapidly decreases at the number of node 150.
At that time we reached constant throughput between the ranges of number of node 150 to 300.6.5.3: Simulation
of End To End Delay MP4 Video 104MBFigure 6.5.3: End To End Delay MP4 Video 104MBNo. of node PUMA
RTCP RTP 20 1909.8 926.403 264.517 60 263.338 142.769 263.338 100 263.197 143.024 263.197 150
3674.93 1082.58 3674.93 200 3685.39 1331.79 3685.39 250 3703.86 1397.97 3703.86 300 4418.65 1763.65
4418.65 Table 6.5.3 End To End Delay MP4 Video 104MBThe next Figure 6.5.3 is of end to end delay and
different number of nodes among RTP, RTCP and PUMA. We obtain the value of RTP = 264.517, RTCP =
926.403 and PUMA = 1909.8 on the number of node 20. At that moment of number of node 60 we obtain the
express reduce value at the moment of number of node 60 to 100 remain constant later than node 100 they
enhance end to end delay continuously at the number of node 300 for all protocols RTP,RTCP and PUMA. 6.5.4
Simulation of Energy MP4 Video 104MBFigure 6.5.4 Energy MP4 Video 104MBNo of nodes PUMA RTCP RTP
20 10.768906 11.793685 10.768906 60 8.714629 6.630032 8.714629 100 8.72243 5.491185 8.72243 150
43.636515 44.258201 43.636515 200 54.683368 79.654864 54.683368 250 89.415469 70.042887 89.415469
300 78.269094 90.374792 89.415469 Table 6.5.4: Energy MP4 Video 104MBThe portion explained the different
value of number of nodes for all protocols RTP, RTCP and PUMA in this graph we calculate the energy. To start
with node 20 we search out the result of RTP, RTCP and PUMA obtain the near about same values at the
moment of node 60 to node 100 we obtain the smallest amount energy consumption. After leave behind node
100 we obtain the large amount of energy are use till the number of node 300. 6.6 Result of MP4 Video
400MB6.6 .1 Simulation of Packet Delivery Ratio MP4 Video 400MB Figure 6.6.1: PDR MP4 Video 400MBNo.
of node PUMA RTCP RTP 20 0.1523 0.692 0.0805 60 0.1372 0.1161 0.0874 100 0.1262 0.1262 0.0735 150
0.095 0.095 0.095 200 0.053 0.053 0.0362 250 0.1683 0.1683 0.0308 300 0.0713 0.0713 0.023 Table 6.6.1:
PDR MP4 Video 400MBIn this graphical representation of packet delivery ratio of using in this three protocols
RTP, RTCP and PUMA. I observe that node 20 RTP shows the minimum value near about zero and PUMA
value is 0.1523 but RTCP value is highly increasing the number of node 60 RTCP sudden decrease but RTP
and PUMA both are similar to previous node till at the node of 100 both are decreasing RTP and PUMA but the
RTCP increases case of PDR. After then number of node 150 we observed same value to all protocols. At the
point of 200 nodes all three protocols sudden decreases till the point of 250 node RTCP and PUMA both are
similar value to providing increase PDR but the RTP continuously decrease range of 200 to 300 and point of
300 nodes both protocols are decreases. Table 6.6.2: Table Throughput MP4 Video 400MB No. of node PUMA
RTCP RTP 20 46.76 62.62 96.53 60 43.65 110.09 127.08 100 111.58 111.58 127.11 150 37.27 37.27 42.59
200 32.27 32.27 91.19 250 41.61 41.61 97.65 300 44.87 44.87 92.04 Table 6.6.2: Table Throughput MP4 Video
400MBIn this throughput section we are using basically three protocols. All protocols provide the values at
different number of nodes. At the point of node 20 (RTP =96.53) gives the higher value compare to both
protocols (RTCP = 62.62 and PUMA = 46.76) then till the number of node 60 we get the value of RTP and
RTCP increase the value and constant range of 60 to 100 nodes but the case of PUMA decrease the value of
node 60 then increase the value of node 100. After then all protocols get the decreasing value till the number of
node 150 then the RTP value increase continuously at the range of 150 to 250 and the value of 300 nodes
suddenly decrease the throughput value. PUMA and RTCP get the similar values at the range of 100 to 300
nodes after 200 nodes they are increase the value continuously at the node of 300. 6.6.3 End to End Delay MP4
Video 400MBFigure 6.6.3: End To End Delay MP4 Video 400MBNo. of node PUMA RTCP RTP 20 988.156
275.74 1268.77 60 307.615 258.078 263.597 100 255.318 255.318 264.021 150 768.546 768.546 768.546 200
1060.74 1060.74 1563.78 250 885.385 885.385 1370.06 300 1028.96 1028.96 1472.04 Table 6.6.3: End To
End Delay MP4 Video 400MBIn this end to end delay video we see the value number of node 20 (RTP =
End Delay MP4 Video 400MBIn this end to end delay video we see the value number of node 20 (RTP =
1268.77, RTCP = 275.74 and PUMA = 988.156) we got the minimum value case of RTCP till the range between
number of nodes 20 to 60 are constant value. RTCP and PUMA both are suddenly decrease the value at the
number of node 60 then till the constant value at the number of node 100 after we observe the value of nodes
are increasing at continuously manner but in the case of RTP we also achieve the same value till the number of
node 150 then we are increasing the number of nodes that time delay also increasing.6.6 .4 Simulation of
Energy MP4 Video 400MB:Figure 6.6.4: Energy MP4 Video 400MBNo. of node PUMA RTCP RTP 20
23.020195 22.469671 9.307384 60 98.218443 61.058086 51.432641 100 99.962917 99.962917 51.0195 150
58.899391 58.899391 36.490399 200 69.756692 69.756692 0.051658 250 98.217664 98.217664 38.585532
300 92.48345 92.483454 1.814728 Table 6.6.4: Energy MP4 Video 400MBIn the above graph number of nodes
represent on abscissa and energy represent on ordinate, this graph represents the relationship between number
of nodes and energy on different protocols which are PUMA, RTCP and RTP. At the 20 node we get the same
value of PUMA and RTCP but the case of RTP is the minimum energy required then the range of 20 to 100
number of nodes continuously increases there energy to the protocols. After the node 100 we get the fall values
between the numbers of nodes 100 to 200 then again increasing the energy at the 250 node then suddenly
decrease the value of 300 nodes. PUMA and RTCP get the same value of between the ranges of 100 to 300
numbers of nodes. 6.7 Result of MP4 Video 154MB6.7 .1 Simulation of Packet Delivery Ratio MP4 Video
154MB Figure 6.7.1: PDR MP4 Video 154MBNo. of node PUMA RTCP RTP 20 0.0871 0.0246 0.1488 60
0.2175 0.3 0.3242 100 0.1803 0.2616 0.3239 150 0.0381 0.062 0.0715 200 0.0266 0.0473 0.0436 250 0.0257
0.0644 0.1084 300 0.0178 0.0145 0.0483 Table 6.7.1: PDR Video 154MBThis is the last portion of packet
delivery ratio MP4 video on number of different nodes with different protocols RTP, RTCP and PUMA. We get
the different values are obtain RTP= 0.1488, RTCP= 0.0246 and PUMA= 0.0871 on the number of node 20.
After leave behind we get the large amount of packet delivery ratio on the number of node 60 after leave behind
node 60 to 100 are get the small amount of PDR. after leave behind node 100 between the range of number of
node 300 they are continuously get the small amount of PDR to all protocols but the RTP get the large amount
of PDR compare to other protocols if we see all protocol on this portion then PUMA is best just because of they
are continuous decrease PDR then number of nodes increases. 6.7.2 Simulation of Throughput MP4 Video
154MB:Figure 6.7.2: Throughput MP4 Video 154MBNo. of node PUMA RTCP RTP 20 102.74 13.49 298.15 60
286.74 285.45 901.49 100 286.92 282.93 901.49 150 84.64 63.45 227.22 200 73.73 52.96 182.64 250 87.19
60.65 256.47 300 76.15 40.8 233.08 Table 6.7.2: Throughput MP4 Video 154MBThe next section of through put
video4 we use the 154 MB video size for different number of nodes on the different protocols like RTP, RTCP
and PUMA. If we observe the node 20 to all protocols are provided different value at this point. In that case all
protocols suddenly raise the through put on the number of node 60 then continuous on the number of node 100
all are provided same values. In this case of number of node 150 they all are provided small amount of through
put value then similar changes or remain constant value node 150 to number of nodes 300. Then again RTP get
the large amount of through put compare to other protocols if we see all protocol on this portion then PUMA is
best just because of they are continuous decrease through put then number of nodes increases. 6.7 .3
Simulation of End To End Delay MP4 Video 154MBFigure 6.7.3: End To End Delay MP4 Video 154MBNo. of
node PUMA RTCP RTP 20 1784.59 1193.97 1104.44 60 262.52 257.397 253.425 100 263.32 287.415 253.445
150 3406.79 2511.5 1403.55 200 4149.41 3962.57 944.718 250 3250.75 4087.21 1356.23 300 4222.59 5621.2
1484.4 Table 6.7.3: End To End Delay MP4 Video 154MBThe next Figure 6.7.3 shows below the representation
of value of end to end delay for MP4 video 4 on different number of nodes In that case of RTP and RTCP given
same value on the number of node 20 but the PUMA are given large amount of delay compare to RTP and
RTCP. In this case of node 60 we get hold of small amount of delay compare a node 20 then number of node 60
to 100 they get hold of constant value given. In that case of number of node 150 they are given value of this point
provided large amount of delay. After that moment we get hold of small amount of delay on node 200 for all
protocols. In this case of 250 or 300 number of nodes we get hold of large amount end to end delay. If we see
the RTP are given small amount of delay and RTCP gives large amount of delay. 6. 7.4 Simulation of Energy
MP4 Video 154MBFigure 6.7.4: Energy MP4 Video 154MBNo. of node PUMA RTCP RTP 20 11.51913
82.266605 16.906012 60 10.743029 10.638985 99.972618 100 10.821888 8.988981 99.962917 150 33.99073
32.491634 45.601225 200 56.264679 44.698446 60.480242 250 89.427428 76.962917 98.063649 300
74.962409 62.000813 92.479097 Figure 6.7.4: Table Energy MP4 Video 154MBThe next and last type of MP4
video is 154 MB presented above in Figure 6.33 represents the result of energy for different number of nodes on
the behalf of PUMA, RTP and RTCP. We obtain the value of PUMA =11.51913, RTCP= 82.266605 and RTP=
16.906012 on the number of node 20. At that moment of number of node 60 we obtain the value of PUMA
=10.743029, RTCP= 10.638985 and RTP= 99.972618 on the number of node 60. At the moment of number of
node 100 we obtain the constant values similar to node 60. At the moment of passing 100 node we obtain the
large amount of value till the number of node 250 then sudden on the number of node 300 we obtain small
amount of energy are required. If we see the RTP they are required large amount of energy for transmission to
increase number of nodes but PUMA firstly required minimum energy consumption after increases node then
energy increases with nodes. 6. 8 Result of HD Video 353MB6. 8.1 Simulation of Packet Delivery Ratio HD
Video 353MB Figure 6.8 .1: PDR HD Video 353MBNo. of nodes PUMA RTCP RTP 20 0.0852 0.1997 0.0676
60 0.0988 0.1429 0.1316 100 0.0831 0.1428 0.1193 150 0.0497 0.094 0.0556 200 0.0367 0.0533 0.0533 250
0.0578 0.1668 0.1668 Table 6.8.1: PDR HD Video TableIn this section I have observe the packet delivery ratio
at nodes of 20 (RTP value is 0.0676, RTCP value is 0.1997 and PUMA value is 0.0852) at the minimum nodes
of 20 I get the value of the higher PDR in RTCP, as I shown into the graph I get a value of RTCP decrease while
of 20 I get the value of the higher PDR in RTCP, as I shown into the graph I get a value of RTCP decrease while
increases the value of number of nodes and remaining two other nodes increases the value of PDR with
increase the number of node 60 after that point the value of PDR decrease in all cases (PUMA,RTCP, RTP) till
the value of 200 nodes after that point the value of RTCP,PUMA and RTP going to rise at the value of 250 node
the value of RTCP and RTP decrease till the nodes of 300 while the value of PUMA is constant in between the
range of 250 to 300. 6.8 .2 Simulation of Throughput HD Video 353MB:Figure 6.8.2: Throughput HD Video
353MBNo. of node PUMA RTCP RTP 20 296.93 171.33 215.76 60 395.53 396.76 392.05 100 395.61 396.75
392.01 150 292.99 133.12 183.18 200 274.66 111.87 111.87 250 286.73 139.71 139.71 300 151.98 151.98
151.98 Table 6.8.2: Throughput HD Video 353MBWe are noted in this portion the value of through put (the value
of packet delivery ratio per unit time) begging the 20 number of nodes the value of PDR increase the 60 number
of nodes and after that point its constant till the 100 number of nodes. After the value of 100 nodes the PDR
decrease with increase the number of nodes and I get a same value all protocols on the node 300.6.8 .3
Simulation of End To End Delay HD Video 353MBFigure 6.8.3: End To End Delay HD Video 353MBNo. of
node PUMA RTCP RTP 20 1420.12 1006.72 1234.6 60 263.904 255.122 258.307 100 264.39 255.157 261.783
150 1444.02 844.361 1346.92 200 2233.95 942.353 942.353 250 2233.99 895.521 895.521 300 1085.83
1085.83 1085.83 Table 6.8.3: End To End HD Video 353MBThe average delay for data packets (End to End
Delay) I get idea from above graph the value of (End to End Delay) increase the number of data packets is range
of 20 to 60 nodes and the delay of data packets constant in between range of 60 to 100 nodes while the value of
all protocols gets rise after the 100 number of nodes and the maximum delay occurs in case of PUMA till the 250
and all the protocols get same value on the 300 nodes. 6.8 .4 Simulation of Energy HD Video 353MBFigure
6.8.4: Energy HD Video 353MBNo. of node PUMA RTCP RTP 20 9.838072 17.059733 20.062773 60
53.068768 99.972618 56.446071 100 52.865352 99.962917 53.45477 150 13.850783 57.275225 24.847195
200 14.809515 68.178943 68.178943 250 17.059157 98.144586 98.144586 300 91.983441 91.983441
91.983441 Figure 6.8.4: Energy HD Video 353MBThe above graph shows that the maximum energy
consumptions in between the range of 20 to 60 nodes for RTCP and the I get that average energy consumptions
value of RTP and PUMA in between range of 60 to 100. After that the nodes of all protocols get decreasing in
order till the 150 nodes and the value of two protocols increase like RTP and RTCP till the range of 250, and the
valve of RTP still constant till the nodes of 250, and RTP and RTCP again decrease its value I get a same value
all protocols on the node 300. 6.9 Result of HD Video 172MB6.9 .1 Simulation of Packet Delivery Ratio HD
Video 172MBFigure 6.9.1: PDR HD Video 172MBNo. of node PUMA RTCP RTP 20 0.0829 0.1363 0.1363 60
0.2014 0.2912 0.2912 100 0.1687 0.291 0.296 150 0.0364 0.0665 0.0885 200 0.0281 0.0496 0.0496 250
0.0275 0.1174 0.1174 300 0.0197 0.1188 0.1197 Figure 6.9.1: Table PDR HD Video 172MBIn case of this HD
video we have observe at 20 number of nodes we get average value of all protocols the value of PDR
continuously increase till the 60 number of nodes and the all protocol value decrease that point 200 nodes then
the RTP and RTCP become increase between the number of nodes 200 to 250 while the value of puma is
decrease but the value of PDR for RTCP and RTP become constant for a upcoming number of nodes. 6.9 .2
Simulation of Throughput HD Video 172MB:Figure 6.9.2: Throughput HD Video 172MBNo. of node PUMA
RTCP RTP 20 304.86 258.62 258.62 60 807.5 808.74 808.74 100 807.81 808.74 808.74 150 262.13 205.85
205.86 200 247.59 203.94 203.84 250 300.13 255.25 255.45 300 259.23 255.93 255.93 Figure 6.9.2: Table
Throughput HD Video 172MB100 to 150 number of nodes then the 150 to 200 nodes we get the constant value
for the RTP and RTCP but PUMA is decrease between the range of 150 to 200 nodes after than range of 200 to
300 nodes the value is increase of RTP and RTCP while the value of PUMA are also increase 200 to 250
number of nodes but after 250 node we analysis the value of PUMA is decrease at the range of 250 to 300
nodes.6.9 .3 Simulation of End To End Delay HD Video 172MBFigure 6. 9.3: End To End Delay HD Video
172MBNo. of node PUMA RTCP RTP 20 2019.99 1331.03 1331.03 60 262.749 254.095 254.095 100 263.765
254.113 254.267 150 3208.36 1098.53 1027.54 200 4238.03 988.635 998.53 250 3085.38 1459.83 1478.62
300 3482.66 1478.96 1498.71 Table 6. 9.3: Table End To End Delay HD Video 172MBIn the initial stage the
PDR value are working smoothly in between the range of 20 to 60 the value of delay decrease till the point of 60
and the same value going on at the point of 100 nodes suddenly the value of PDR (end to end delay) increase
the maximum value of PUMA at the number of 200 nodes while we get a maximum delay in case of RTP and
RTCP between the range of 250 to 300 as well as in between this range both protocol (RTP and RTCP) having
a constant value. 6.9 .4 Simulation of Energy HD Video 172MB Figure 6. 9.4: Energy HD Video 172MBNo. of
node PUMA RTCP RTP 20 11.703547 16.468435 16.468435 60 11.421269 99.972618 99.927543 100
11.509691 99.962917 99.987642 150 31.963479 41.555257 41.378467 200 15.603376 44.127524 44.254631
250 89.377051 98.266412 98.669627 300 83.161928 98.742593 98.683367 Table 6.9.4: Table Energy HD
Video 172MBWe get similar value of all protocol at the number of node 20 the value of RTP and RTCP
suddenly rise after the passing of 20 number of nodes till the 60 number of nodes while the value of PUMA
continuously constant in the range of 20 number of nodes to 100 number of nodes. The value of RTCP and RTP
decrease after the 100 number of nodes till the 150 while the value of PUMA in increase 100 number of node to
150 node. When all three protocols passing the value of 200 nodes the value is increase of consumption of
energy till the point of 250 node and after 250 node the value of RTP and RTCP become at constant while the
value of PUMA decrease in the range of 250 to 300. CHAPTER 7 CONCLUSION CHAPTER 7CONCLUSION
7.1 ConclusionThe simulation scenario consists of two different video types HD and MP4 with four parameters
like packet delivery ratio, throughput, end to end delay and Residual energy of node to compare the
performance of three routing protocols PUMA, RTP and RTCP with the help of these simulation metrics,
comparison among routing protocols was done. The thesis compares text data, HD and MP4 videos. In text
comparison among routing protocols was done. The thesis compares text data, HD and MP4 videos. In text
data, it was shown that the overall performance of RTCP is better for throughput, end to end delay and residual
energy but for PDR, RTCP and RTP both perform same. Thus as a complete RTCP gives better performance. In
HD video, two sizes were chooses. They are 172MB and 353MB, for 172MB the value of PDR and residual
energy, RTP performs well. The value of end to end delay and residual energy by RTCP was better than other
and for throughput PUMA performs. Hence for overall scenario RTCP and RTP both works. For 353MB, the
performance of RTCP for PDR, end to end delay and residual energy was better and for throughput PUMA
performance. So for both HD videos RTCP performs well. For MP4, video sizes taken was 400MB and 154MB
which near to the sizes of HD videos. We are tried to take video sizes of at most same. For 154MB, RTP
performs well in all the four parameters PDR, throughput, end to end delay and energy. For 400MB,
performance of PUMA in PDR and energy is better but RTP performs for throughput and RTCP performs in end
to end delay. Hence over all performance of MP4 any routing protocols cannot perform well. It is always non-
deterministic to find out the suitable routing protocols. In conclusion, the performance of RTCP routing protocol
was better than both the RTP and PUMA routing protocols. 7.2 Future Work This thesis work considers only
single hop WiMax network. However with increased capacity demand, Wireless networks are becoming more
complex in terms of topology. An associated standard, 802.16 provides the specifications for deploying multiple
relay stations within the range of a base station for improving coverage. While our algorithm can be applied to
such deployments it may be improved further by incorporating the capacity fluctuations due to different channel
conditions between the base station and different relay stations. : The upcoming Wireless Release 2 or 802.16
standard has several proposed facilities for improving video streaming services while still being backward
compatible with the current 802.16 standard. The new standard has enhanced MBS mode such that it can
dynamically switch between multicast and unicast mode. It will be interesting to see how our routing protocols
can be adapted to this enhancement such that the video streaming performance can be improved even in the
presence of variable-bit-rate unicast traffic demands. The 802.16 standard also has improved sleep mode
operations to further improve energy efficiency. REFERENCES [1] Chandra .R, Saravanaselvi .P. A Key issue
on multicast video streaming over Wireless network. December 2012, Volume. 2, Issue 12.[2] S. Sharangi, R.
Krishnamurti and M. Hefeeda, "Energy-Efficient Multicasting of Scalable Video Streams Over Wireless
Networks," IEEE Transactions on Multimedia, vol. 13, no. 1, pp. 102-115, Feb. 2011.[4] R. M. Matos, P. Neves,
and S. Sargento, "Evaluating WiMAX QoS performance in a real testbed," in Proc Conf. on Telecommunications
- ConfTele, Santa Maria da Feira, Portugal, May, 2009.[3] Vicuna Nelson, Jimenez Tania and Hayel Yezekael
"Performance of SCTP in Wi-Fi and WiMAX networks with multi-homed mobiles" WNS2, October 24, 2008,
Athens, Greece. [43] M. Mushtaq and T. Ahmed. Smooth Video Delivery for SVC Based Media Stream- 2008.
CCNC 2008. 5th IEEE, pages 447{451, 2008. [5] Vicuna Nelson, Jimenez Tania and Hayel Yezekael
"Performance of SCTP in Wi-Fi and WiMAX networks with multi-homed mobiles" WNS2, October 24, 2008,
Athens, Greece. [6] R. Mahmood, M. I. Tariq et al., "A Novel Parameterized QoS based Uplink and Downlink
Scheduler for Bandwidth/Data Management over IEEE 802. 16d Network," IJRTE, vol. 2, no. 1, November 2009,
pp. 42-46.[7] IEEE Std. for Local and MAN Part 16:Air Interface for Fixed Broadband Wireless Access Systems-
Amendment 2: MAC Modifications and Additional Physical Layer Spec. for 2-11 GHz, Std 802.16a-2003
(Amendment to IEEE Std 802.16-2001), 2003.[8] IEEE Std. 802.16-2001 IEEE Standard for Local and MAN Part
16:Air Interface for Fixed Broadband Wireless Access Systems, 2002.[9] Shelly Kalra, Multicasting of Video
Streams over Wireless Networks - A Review, (IJSR), Volume 3 Issue 5, May 2014.[10] Sheraz Maki Mohd
Ahmed, Aisha-Hassan A. Hashim, Othman O Khalif, Tahani Abdullah and Marwa Yousif, Video streaming over
Wireless Networks, IJCSNS, VOL.14 No.11, November 2014.[11] Gopikrishnan.R, Ms. J.R.Thresphine, An
Efficient Real Time Video Multicasting Protocol and WLANs Cross-Layer Optimization in IEEE 802.11N,
IJCSMC, Vol. 3, Issue. 2, February 2014, pg.811 - 814.[12] Bilal Ahmed, M. Junaid Arshad, Jumshed Akhtar.,
Simulation And Comparative Analysis Of Video Streaming Over Adsl & Wireless Network,
Sci.Int(Lahore),26(2),663-667,2014.[13] Jaswant Kumar Joshi, Devendra Singh Bais, Amar Nath Dubey,
Analyzing Video Streaming Quality over Different Routing Protocols on Mobile Ad-hoc Network, IJARCSSE,
Volume 3, Issue 10, October 2013.[14] M. Imran Tariq, M. Ajmal Azad, Razvan Beuran, and Yoichi Shinoda,
Performance Analysis of VoIP Codec's over BE Wireless Network, International Journal of Computer and
Electrical Engineering, Vol. 5, No. 3, June 2013.[15] K. Sakthisudhan, P.Thangaraj, C.Marimuthu, Comparative
Analysis of Video Streaming Services in H.323 Application layered protocol coexisting of WLAN with Wireless
Broadband Standard networks, International Journal of Computer Applications (0975 - 8887) Volume 46- No.2,
May 2012.[16] Chandra. R, Saravanaselvi. P, A Key Issue on Multicast Video Streaming Over
WIRELESSNetwork, International Journal of Scientific and Research Publications, Volume 2, Issue 12,
December 2012.[17] Swarna Parvathi.S, K.S.Easwarakumar, Performance Evaluation of Multicast Video
Streaming over Wireless, IJAIS , Volume 3- No.4, July 2012.[18] Somsubhra Sharangi, Ramesh Krishnamurti,
And Mohamed Hefeeda, Energy-Efficient Multicasting Of Scalable Video Streams Over Wireless Networks,
IEEE Transactions On Multimedia, Vol. 13, No. 1, February 2011[19] G. SASI, Issues on Wireless Networks and
Cryptography, International Journal of Advanced and Innovative Research (2278-7844) / 106 / Volume 4 Issue
4.[20] Cheng-Hsin Hsu And Mohamed Hefeeda, A Framework For Cross-Layer Optimization Of Video
Streaming In Wireless Networks, ACM Transactions on Multimedia Computing, Communications and
Applications, Vol. V, No. N, December 2009.[21] William (Will) Hrudey, Streaming Video And Audio Content
Over Mobile Wireless Networks, Simon Fraser University Summer 2009 All.[22] Shamik Sengupta, Mainak
Chatterjee And Samrat Ganguly. Improving Quality Of Voip Streams Over Wireless, IEEE, 2007.[23] Heiko
Schwarz, Detlev Marpe, And Thomas Wiegand, Overview Of The Scalable Video Coding Extension Of The
Schwarz, Detlev Marpe, And Thomas Wiegand, Overview Of The Scalable Video Coding Extension Of The
H.264/AVC Standard, IEEE Transactions On Circuits And Systems For Video Technology, Vol. 17, No. 9,
September 2007.[24] Thomas Schierl, Karsten Gnger, Cornelius Hellge, And Thomas Wiegand, Fraunhofer -
Heinrich Hertz Institute Thomas Stockhammer, Nomor Research Gmbh, Svc-Based Multisource Streaming For
Robust Video Transmission In Mobile Ad Hoc Networks, IEEE Wireless Communications : October 2006.[25]
Meng Guo, Mostafa H. Ammar, Scalable live video streaming to cooperative clients using time shifting and video
patching., IEEE INFOCOM 2004.[26] John G. Apostolopoulos, Wai- tian Tan, Susie J. Ie, Video Streaming:
Concepts, Algorithms, and Systems, HPL-2002-260 September 18th , 2002.[27] P. Hosein, "Broadcasting VBR
traffic in a Wireless network," in Proc. of IEEE Vehicular Technology Conference (VTC'08), Calgary, Canada,
September 2008, pp. 1-5 [28] Local and Metropolitan Area Networks Part 16: Air Interface for Broadband
Wireless Access Systems Broadband Wireless Metropolitan Area Network.[Online]. [29] Available:
http://standards.ieee.org/getieee802/802.16.html http://standards.ieee.org/getieee802/802.16.html[30] Thomas
Locher, Remo Meier, Stefan Schmid, and Roger Wattenhofer. Push-to-Pull Peer-to-Peer Live Streaming. In 21st
International Symposium on Distributed Computing (DISC), Lemesos, Cyprus, September 2007.[31] Pulsar: Live
and On-Demand Streaming Software, June 2008. http://www.getpulsar.com http://www.getpulsar.com.[32] X.
Zhang, J. Liu, B. Li, and T.S.P. Yum. CoolStreaming/DONet: A Data-Driven Overlay Network for E_cient Live
Media Streaming. Proceedings of IEEE INFO-COM , 3:13{17, 2005.[30] H. Schulzrinne, S. Casner, R. Frederick,
and V. Jacobson, "Real time protocol," RFC 3550, Jul. 2003. ng RTP in NS2. November 2007, Volume 7 No.11[
] Z. Liu, Y. Shen, S.S. Panwar, K.W. Ross, and Y. Wang. P2P Video Live Streaming with MDC: Providing
Incentives for Redistribution. Multimedia and Expo, 2007 IEEE International Conference on, pages 48{51, 2007.
[31] D.T. Chen, N. Natarajan, and Y. Sun, "On the Simulation, Modeling, and Performance Analysis of an
802.16E Mobile Broadband Wireless Access System", Communications and Computer Networks, 2005.[32]
Arjun Desai, Raj Jain. RTP, RTCP, RTSP= internet protocol for real time multimedia communication. [36]
P.sankar and Chellamuthu."Virtual Clock Scheduling Algorithm for Video Streaming" in International Journal of
Computer Theory and Engineering, Vol. 1, No. 5, December, 2009.[37] M. Garey and D. Johnson, Computers
and Intractability: A guide to the Theory of NP-Completeness. W. H. Freeman and Company, 1979.[38]
https://www.google.co.in/search?
q=streaming+media&rlz=1c1GNAM_en_gbIN683IN683&espv=2&biw=1024&bin=67
https://www.google.co.in/search?
q=streaming+media&rlz=1c1GNAM_en_gbIN683IN683&espv=2&biw=1024&bin=67[39] R. Iber, M. Guerra, S.
Sawhney, L. Golovanevsky, and M. Kang, "Measurement and analysis of video streaming performance in live
UMTS networks," in Proc. WPMC 2006, San Diego, CA, Sep. 2006, pp. 1-5.[40] Ganesh Venkatesan, Alex
ashlay, Ed Reuses Todor Cooklev. Video Over 802.11 Tutorials, March 2007.[41] Open Mobile Video Coalition
Ibsite. [Online]. Available:http://www.openmobilevideo.com/resources/o mvc-materials/reports[42]
Chandra.R*,Saravanaselvi.P# ,"A Key Issue on Multicast Video Streaming Over WIRELESS Network,
International Journal of Scientific Reserach Publicatios, Vol.2, Issue 12, dec.2012 [43] Supratim Deb, Sharad
Jaiswal, Kanthi Nagaraj,"RealTime Video Multicast in Wireless Networks. [45]
docwiki.cisco.com/wiki/quality_of_service_Networking[46] Behrouz A Forouzan, A book of Data
Communications and Network, 2nd Edition.[47 ]H. Schwarz, D. Marpe, T. Iigand, -Overview of the scalable
video coding extension of the H.264/AVC standard, IEEE Trans. Circuits Syst. Video Technol, vol. 17, no. 9,
pp.1103-1120, Sept. 2007.[48] G. Auwera, P. David, and M. Reisslein. Traffic characteristics of H.264/AVC and
SVC variable bit rate video. Available: http://trace.eas.asu.edu/h264/index.html
http://trace.eas.asu.edu/h264/index.html[49] D. T. Nguyen and J. Ostermann, "Congestion control for scalable
video streaming using the scalability extension of H.264/AVC," IEEE Journal of Selected Topics in Signal
Processing, vol. 1, no. 2, pp. 246-253, Aug. 2007.[50] M. W. Wang, J. Z. Lin, Q. Wang, "Research on MPEG
Video Stream Transmission Performance Based on NS-2", Applied Mechanics and Materials, Vol. 443, pp. 412-
416, 2014[51] D. T. Nguyen and J. Ostermann, "Congestion control for scalable video streaming using the
scalability extension of H.264/AVC," IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 2, pp.
246-253, Aug. 2007.[52] The H.264/AVC Advan ced Video Coding Standard.
[WWWdocument]http://www.fastvdo.com/spie04/spie04-h264OverviewPaper.pdf (Accessed25.11.2009)List of
Publication Neeta Moolani, Minakshi Halder, "To Study of Video Streaming in WiMax using Real Time Routing
Protocols" IJSRD - Vol. 4, Issue 03, 2016, ISSN (online): 2321-0613.Neeta Moolani, Minakshi Halder "Analysis
and comparison of Video Streaming by Varying Protocols" IJARECE - Communicated.102

Plagiarism Detector
Your right to know the authenticity!

Das könnte Ihnen auch gefallen