
School year 2018-2019/ Second semester/ FET / Computer Engineering

Faculty of Engineering and Technology


Department of Computer Engineering

CEF 522: New Network Architecture

Chapter 5: Future of Smart Grid Communications


The Smart Grid can be defined as an electric system that uses information, two-way, cyber-secure communication technologies, and computational intelligence in an integrated fashion across the entire spectrum of the energy system, from generation to the end points of consumption. The availability of new technologies such as distributed sensors, two-way secure communications, advanced software for data management, and intelligent and autonomous controllers has opened up new opportunities for changing the energy system. The main objective is to develop a measurement science for communication networking, with the specific aim of strengthening modeling capabilities and determining the potential impact on critical infrastructure.

The fourth industrial revolution, known as Industry 4.0, has paved the way for a systematic deployment of the modernized power grid (PG) to manage continuously growing energy demand by integrating renewable energy resources. In the context of Industry 4.0, a smart grid (SG) employing advanced Information and Communication Technologies (ICTs), intelligent information processing (IIP) and future-oriented techniques (FoT) allows energy utilities to monitor and control power generation, transmission and distribution processes in a more efficient, flexible, reliable, sustainable, decentralized, secure and economic manner. Despite providing immense opportunities, the SG faces many challenges in the context of Industry 4.0 (I 4.0). To this end, this chapter presents a comprehensive overview of critical smart grid components, together with international standards and information technologies, in the context of Industry 4.0. In addition, this study surveys different smart grid applications, their benefits, characteristics, and requirements (Fig. 11).

Fig. 11. An example of smart grid architecture


5.1. Smart grid in the context of industry 4.0
In the following, we explain the details of the smart grid in the context of Industry 4.0.
a) Power grid evolution
Today's power grids continue to operate using antiquated technologies and systems. Electricity generation and distribution (EGD) in these PGs can be divided into three subsystems, namely generation, transmission, and distribution. The PGs generate electricity and, for long-distance transmission, step-up transformers convert it into high-voltage electricity at the transmission substations. This high-voltage electricity is then converted into medium-voltage electricity at the distribution grid (DG). Finally, the medium voltage is converted into low voltage in order to serve the customers. For several decades, this electricity flow has remained unchanged in the PG. However, the PG subsystems have changed at different paces over time; therefore, the automation level varies significantly across the components of the PGs. The PGs have transformed from a set of isolated grids into interconnected grids. These PGs are unified into regional or national grids to handle unbalanced supply and demand, by employing duplicated paths for power flow and by tolerating failures such as transmission equipment or generation plant failures.

b) Energy management and environmental issues in traditional PG


The operations of the existing PGs are limited and inefficient. In the existing PG, because of the lack of efficient energy storage or backup capacity, supply must keep up with demand, which results in a forced just-in-time paradigm. Thus, appropriate energy storage and advanced storage technologies are crucial for the PG.

Moreover, during peak load hours, variations in energy demand strain the aging infrastructure of the PGs. This causes electricity cut-offs or quality issues affecting availability and reliability, which is a genuine cause for concern. To cope with the demand, peaker plants based on non-renewable energy sources have to be turned on for additional electricity supply. However, this approach is inefficient and uneconomical when the average electricity demand is significantly less than the peak. Also, in the longer run, it may be difficult or even impossible to match the power supply to this peak demand using peaker plants, because of ever-increasing energy demands. Even a small control failure in a peaker plant can be fatal enough to crash the entire PG.
c) Integration of renewable energy resources and issues in traditional
PG
The above-mentioned emerging crises have recently attracted worldwide attention to the search for alternative renewable energy resources that are durable, environment-friendly and can sustain the long-term development of the power industry for reliable and high-quality electricity. The potential resources for renewable or green energy include hydro, geothermal, solar, wind, tidal and biomass. Their reliable and efficient integration with PGs in the process of electric energy generation can eliminate or significantly reduce the emission of harmful gases into the atmosphere, and thereby slow the adverse effects of climate change. In most parts of the world, wind and solar power are available; however, their contribution to PGs is fairly low, since each of them faces the problem of uncertain energy output in variable environments.
d) Need for modernized SG
All the above-mentioned challenges and issues are the driving force behind the transformation of the existing PG into an automated SG paradigm known as Smart Grid Industry (SGI) 4.0. In SGI 4.0, the key objective is to provide intelligent electricity by using advanced ICTs, envisioned to offer a variety of advantages in the following areas: emerging economies, renewable energy sources, environment, efficiency, reliability, security, and safety.


Although some novel studies exist that address smart grid challenges, they unfortunately do not treat these issues in the context of Industry 4.0. The SGI 4.0 paradigm offers a platform, vision, and architecture for high-quality EGD in the smart grid. The key principles behind SGI 4.0 include decentralization, virtualization, real-time capability, service orientation, modularity, and interoperability, while the fundamental characteristics of SGI 4.0 are data integration, flexible adaptation, secure communication, intelligent self-organization, optimization of the electricity generation process, service orientation and interoperability. SGI 4.0 decreases the average energy production price by improving the cost situation of the smart grid for economic benefits, i.e., a "Moore's Law" effect. Moreover, SGI 4.0 will provide environment-friendly, high-quality, and competitive power generation solutions by using renewable energy resources. Thus, electricity generation systems (EGSs) will be adaptable, more robust, flexible, scalable and reliable in meeting varying energy demands, especially during peak hours. In SGI 4.0, the control centers, the transmission infrastructures, and the substations are expected to interact with and monitor electric devices remotely, to employ novel technologies and services to improve power quality, and to coordinate with their local devices in real time and in a self-aware manner to reduce lead times. A fundamental architecture of the smart grid in the context of Industry 4.0 is shown in Fig. 12.

Fig. 12. The basic communication and services architecture for the smart grid in the context of Industry 4.0.


e) Sustainability of SG
In SGI 4.0, smart grid systems are capable of collecting data, communicating with computers to analyze them, and advising the operator on the necessary actions. This self-cognition, self-optimization, and self-customization of SGI 4.0 will enable the different systems of the power grid to operate independently, or with less human intervention, for high-quality electricity.
f) Role of information and communication technologies in SG
To this end, the role of ubiquitous information and an emerging ICT infrastructure is very important for the realization of SGI 4.0. In SGI 4.0, the key aim of ICTs is to build a highly reliable and flexible communication infrastructure, with protocols that enable real-time interactions between producers and consumers in the smart grid. Moreover, in SGI 4.0, all interconnected power generation and consumer systems cooperate closely, through the continuous and autonomous exchange of information and data, to realize a digital and intelligent smart grid.
g) Human role and job opportunities in SG
In SGI 4.0, although industrial processes are more automated, humans continue to play a central role in improving real-time grid performance in terms of power quality, productivity, efficiency, and security. It is therefore important that workers and engineers improve their social, professional, personal and methodical competences, in order to achieve higher power generation efficiency at lower cost and with less resource consumption.
5.2. Quality of service requirements and applications in SG
In the SG, various types of information, from power generation to consumer applications, each with different QoS requirements, are moved efficiently through advanced ICTs. It is expected that enabling various SG applications will increase data quantities from Megabytes, Gigabytes and Terabytes to Petabytes. This massive amount of data must be transferred from source to destination in a QoS-aware manner. Hence, interoperable, secure and flexible bi-directional networking technologies that meet the QoS requirements of each SG component are essential for the successful realization of SGI 4.0. Table 3 shows the QoS requirements of various SG applications. Below, we highlight some of the basic quantitative and qualitative requirements that must be satisfied by the ICT infrastructure in a variety of SG applications in SGI 4.0.


Table 3. QoS requirements of the smart grid applications (s: seconds; ms: milliseconds; h: hours)
a) Quantitative requirements
 Latency
Latency is a measure of the data transmission delay between SG components. It is one of the essential constraints, since some data need to be transmitted immediately. For instance, mission-critical SG applications such as WASA or DR tolerate far less latency than AMI or HEMS. Hence, the networking solutions used for time-critical applications must satisfy the minimum delay requirements.
 Bandwidth
The low, medium and high radio frequencies each have a specific role, depending on the application requirements in the SG. For short-distance communication, as in HEMSs, the high and medium frequency ranges can be used, owing to their wider bandwidth and hence higher data rates. On the other hand, the low frequency ranges can be used for high-quality long-distance communication in various smart grid applications. Lower frequencies, compared to higher frequencies, can avoid LoS problems and can more easily penetrate both linear and non-linear objects.
 Data rates
The smart grid applications generate various types of data, such as text, pictures, audio, video and many others, at different rates. Thus, the choice of an appropriate communication technology is essential for achieving reliable and accurate application-specific data transfers in HANs, NANs, and WANs.

 Throughput


Throughput is the amount of data transferred between smart grid components in a specific time interval. The throughput used by the SG for communication depends on the application characteristics. For instance, the throughput required by the SG for AMI or DR communication is different from that for WASA or TLM.
 Reliability
Reliability is an attribute that defines how successfully a communication system exchanges messages in a timely manner, according to its specifications. Therefore, it is extremely important that the communication nodes always be reliable, for successful and timely message exchange in the SG. The required reliability varies across SG applications, according to their specifications.
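To make these quantitative requirements concrete, here is a minimal Python sketch that encodes a few illustrative QoS profiles and checks whether a candidate communication technology satisfies them. All names and numbers (the application latency budgets, the technology parameters) are hypothetical placeholders, not values taken from Table 3.

from dataclasses import dataclass

@dataclass
class QosRequirement:
    """Quantitative QoS profile of one SG application (illustrative values)."""
    max_latency_ms: float      # worst-case tolerable delay
    min_data_rate_kbps: float  # minimum sustained data rate
    min_reliability: float     # required delivery success ratio, 0..1

@dataclass
class LinkTechnology:
    """Advertised characteristics of a candidate networking technology."""
    name: str
    latency_ms: float
    data_rate_kbps: float
    reliability: float

# Hypothetical profiles: mission-critical WASA is far stricter than AMI metering.
REQUIREMENTS = {
    "WASA": QosRequirement(max_latency_ms=20, min_data_rate_kbps=600, min_reliability=0.9999),
    "AMI":  QosRequirement(max_latency_ms=2000, min_data_rate_kbps=10, min_reliability=0.99),
}

def satisfies(tech: LinkTechnology, req: QosRequirement) -> bool:
    """A technology qualifies only if it meets every quantitative requirement."""
    return (tech.latency_ms <= req.max_latency_ms
            and tech.data_rate_kbps >= req.min_data_rate_kbps
            and tech.reliability >= req.min_reliability)

if __name__ == "__main__":
    candidates = [
        LinkTechnology("cellular", latency_ms=80, data_rate_kbps=5000, reliability=0.999),
        LinkTechnology("fiber", latency_ms=5, data_rate_kbps=100000, reliability=0.99999),
    ]
    for app, req in REQUIREMENTS.items():
        for tech in candidates:
            print(app, tech.name, "meets QoS" if satisfies(tech, req) else "rejected")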
b) Qualitative requirements
 Data accuracy
Data accuracy is an essential data quality index that is directly related to the accuracy of the SG components. Data accuracy has an impact on application efficiency, performance and economic benefits.
 Data validity
Data validity is an important factor concerning how useful information can be retrieved from the massive ocean of data coming from the different SG components. In order to provide various services to customers more efficiently, data validity should be improved.
 Accessibility
Accessibility means that customers must be offered equal opportunities to access SG application services, without discrimination.
 Interoperability
It is obvious that various types of communication technologies and protocols will be combined to provide efficient data exchange between diverse SG components. Therefore, interoperability is essential in both the networks and the SG applications, to guarantee standardized data exchange with consistent meaning.
 Security
Protecting critical data from physical and cyber security attacks is one of the biggest issues in the SG. In all SG applications, end-to-end secure two-way communication is essential for preventing any vulnerability of SG assets.

5.3. The Future of Smart Grid Technologies
Since the early stages of electricity development, network designs have relied on demand forecasting, thus allowing the sizing of:


 The centralised generation units


 The transmission network, which carries electricity over long distances at high
voltages
 The distribution network, which brings electrical power down to the end-user sites
through low-voltage lines
Over the years, thanks to an interconnected Pan-European transmission network, power plants
with increasing nominal power could be exploited in all power systems all over Europe, in
order to lower the cost of energy generation.
Finally, power electronics will be increasingly deployed at the generation level (for instance, fully electronic inverters for PV) and within the grid (FACTS devices, DC links, DC networks) to allow for increased real-time power flow control. This will lower today's Pan-European system inertia, making the system even more sensitive to any type of disturbance. The expected dynamic behaviour of such a power system will have to be considered very early in the planning process, although this is an area that is still relatively unexplored. At the same time, novel technology solutions downstream of the transmission network will open routes for improved network design and operations. For instance:
 The deployment of smart metering will give TSOs/DSOs more information to plan/operate the grid on the basis of better knowledge (in both space and time) of local consumption/distributed generation.
 Large-scale demand-side management approaches could be developed provided that the
costs of the required infrastructure are affordable and regulatory regimes allow for novel
balancing and settlement approaches.
 The progress in ICT, high computational power and large bandwidth communication
networks at affordable cost will favour advanced monitoring and control of very large
power systems following the current technology trends (e.g. HVDC lines using voltage
source control, modular multilevel converters, dynamic line rating).
 Electricity storage could help generators to better manage variable generation and load, as well as to arbitrage for a better economic valorisation of renewable electricity.
A recent European Climate Foundation (ECF) study has addressed the decarbonisation of electricity production by 2050, emphasising the role of the transmission network. Transmission network planning will play a crucial role in the realisation of the European single electricity market and in the decarbonisation process over the next 40 years. The study presented a first appraisal of the Pan-European transmission grid by 2030, coherent with its 2050 vision, using a back-casting approach, but without explicit modelling of an offshore grid.
5.4. Future Concerns of Electricity Systems Worldwide
Transmission networks will have to cope with more uncertainties at all levels (macro-economic growth, generation and consumption patterns, new power technologies) while enabling energy players to comply with energy policies set at governmental level. This requires transmission system operators (TSOs) to simultaneously support the efficient use of existing transmission infrastructures and the implementation of new, efficient infrastructure investments. Electricity systems will thus progressively evolve as explained in Table 4 (Fig. 13).

Table 4. Evolution of electricity systems

Fig. 13. The paradigm change in electric power systems


This paradigm change is made even more complex by the lack of long-term network planning methodologies. Top-down planning approaches will be needed, which requires research and development (R&D) on planning techniques that take into account several irreversible trends.
5.5. ICT opportunities in future Smart Grids
The development of the smart grid is a global priority with numerous benefits, and it has to be supported and promoted by leading industries and government institutions. The Information and Communication Technologies (ICT) sector has an important role to play, because of the development of modern telecommunications.

One example of the use of ICT is to empower today's power grid with the capability of supporting two-way energy and information flow, facilitating the integration of renewable energy into the grid and giving the consumer tools for optimizing energy consumption and costs. Some of the benefits are reduced peak demand, lower energy consumption, etc. This section presents the integration of ICT into the traditional power distribution infrastructure, the development and implementation of the elements necessary for efficient management of the smart grid network, and the smart grid architecture.

Another example is the use of renewable energy sources on telecom sites, which are elements of the smart grid. In telecommunications nowadays, renewable energy sources, because of their high cost per Wh, are generally used in remote areas where the public mains supply is unavailable.
Most of the world's electricity system was built when primary energy was relatively inexpensive. Grid reliability was mainly ensured by having excess capacity in the system, with unidirectional electricity flow to consumers from centrally dispatched power plants. Investments in the electric system were made to meet increasing demand, not to fundamentally change the way the system works. While innovation and technology have dramatically transformed other industrial sectors, the electric system, for the most part, has continued to operate the same way for decades. This lack of investment, combined with an asset life of 40 or more years, has resulted in an inefficient and increasingly unstable system. For the modern smart grid, new architectures have to be designed. The requirements of the smart grid are significantly different from those of the "old-fashioned" electricity system. These changes require a new, more intelligent system that can manage the increasingly complex electric grid. The electrification of the transportation system and the increasing reliance on renewable resources will require new models for such loads and generators, and a new architecture for their efficient and secure interaction through the cyber-physical infrastructure. The new architecture also has to allow brokering, in real time, of flexible smart load demand, while incorporating the increasing volatility in energy provision associated with green energy sources and decentralized generation.


In the smart grid, different users can be involved in "delivering" energy to the grid. The basic prerequisites imposed on power systems relate to their safety, long life and uninterruptible power.

An example of a telecom hybrid power system as an element of the smart grid architecture is given in Fig. 14.

Fig. 14. Telecom hybrid power system as element of smart grid


These new architectures require the development of two-way communication systems, computing, and smart grid devices. The role of ICT is to ensure performant, secure and reliable communication with, and control of, all the smart grid elements, such as the following (a small polling sketch in Python follows the list):
 Power generation,
 Transmission system,
 Transmission substation,
 Distribution system,
 Distribution substation,
 Different renewable energy sources,
 Storage system,
 Residential customer,
 Commercial customer,
 Industrial customer.
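As a toy illustration of this two-way control role, the sketch below models a control centre polling each listed grid element for telemetry and answering with a command. The element names mirror the list above; the telemetry fields, the random load model and the 0.9 load-shedding threshold are invented for the example.

import random

# Grid elements from the list above; each reports telemetry and accepts commands.
GRID_ELEMENTS = [
    "power generation", "transmission system", "transmission substation",
    "distribution system", "distribution substation", "renewable energy sources",
    "storage system", "residential customer", "commercial customer",
    "industrial customer",
]

def read_telemetry(element: str) -> dict:
    """Stand-in for a real sensor read; reports a random load factor in [0, 1]."""
    return {"element": element, "load": random.random()}

def control_command(sample: dict) -> str:
    """Two-way step: the control centre answers each report with a command."""
    return "shed-load" if sample["load"] > 0.9 else "ok"

if __name__ == "__main__":
    for element in GRID_ELEMENTS:           # downlink: poll the element
        sample = read_telemetry(element)    # uplink: telemetry report
        print(f"{sample['element']:>24}: load={sample['load']:.2f} -> {control_command(sample)}")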

Chapter 6: Routing in Machine to Machine (M2M) and Future Networks


Two different types of approaches to connected devices are making headlines in today's IT world. One is machine-to-machine (M2M) processes, which focus on connecting manufacturing devices and equipment in a physical production space; the other is the internet of things (IoT), a much broader term for a new reality in which nearly everything we use has a chip inside it connecting it to the global internet.
The Internet technology has undergone enormous changes since its early stages, and it has become an important communication infrastructure targeting anywhere, anytime connectivity. Historically, human-to-human (H2H) communication, mainly voice communication, has been the center of importance. Therefore, the current network protocols and infrastructure are optimized for human-oriented traffic characteristics. Lately, an entirely different paradigm of communication has emerged with the inclusion of "machines" in the communications landscape. In this sense, machines/devices that are typically wireless, such as sensors, actuators, and smart meters, are able to communicate with each other, exchanging information and data without human intervention. Since the number of connected devices/machines is expected to surpass human-centric communication devices tenfold, machine-to-machine (M2M) communication is expected to be a key element in future networks. With the introduction of M2M, the next-generation Internet, or the Internet-of-Things (IoT), has to offer the facilities to connect different objects together, whether they belong to humans or not.
6.1. Next Generation M2M Cellular Networks: Challenges and Practical
Considerations
The ultimate objective of M2M communications is to construct comprehensive connections among all machines distributed over an extensive coverage area. Recent reports project that the number of machines/devices connected to the IoT will reach approximately 50 billion by 2020 (Fig. 15). This massive introduction of communicating machines has to be planned for and accommodated, with applications imposing a wide range of requirements and characteristics such as mobility support, reliability, coverage, required data rate, power consumption, hardware complexity, and device cost. Other planning and design issues for M2M communications include the future network architecture, the massive growth in the number of users, and the various device requirements that enable the concept of the IoT. In terms of M2M, the future network has to satisfy machine requirements, as power and cost are critical aspects of M2M devices. For instance, set-and-forget applications on M2M devices such as smart meters require very long battery life, where the device has to operate in an ultra-low-power mode. Moreover, the future network should allow for low-complexity, low-data-rate communication technologies that yield low-cost devices and thereby encourage the large scale of the IoT. The network architecture, therefore, needs to be flexible enough to provide these requirements and more. In this regard, a considerable amount of research has been directed towards available network technologies such as Zigbee (IEEE 802.15.4) or WiFi (IEEE 802.11b), interconnecting devices in the form of a large heterogeneous network. Furthermore, solutions for the heterogeneous network architecture (connections, routing, congestion control, energy-efficient transmission, etc.) have been presented to suit the new requirements of M2M communications. However, it is still not clear whether these sophisticated solutions can be applied to M2M communications, due to constraints on hardware complexity.

Fig.15. Expected number of connected devices to the Internet. This chart is obtained from
recent reports developed by both Cisco and Ericsson. The reports discuss the expected growth
in the number of connected devices by 2020 due to the introduction of the M2M market.
With the large coverage and flexible data rates offered by cellular systems, research efforts from industry have recently been focused on optimizing the existing cellular networks for M2M specifications. Among other solutions, the scenarios defined by the 3rd Generation Partnership Project (3GPP) standardization body emerge as the most promising solutions to enable the wireless infrastructure of M2M communications. On this front, a special category that supports M2M features has been incorporated by the 3GPP into the Long-Term-Evolution (LTE) specifications. Because of the M2M communication challenges and the wide range of supported device specifications, the development of features for M2M communication, also referred to as machine-type-communication (MTC) in the context of LTE, started as early as release 10 (R10) of the LTE-Advanced standard. This has continued in subsequent releases, including release 13 (R13), which is currently under development and expected to be released in 2016. For these reasons, in this article, we focus on the cellular MTC architecture based on LTE technology as a key enabler with wide-ranging MTC support.
Due to the radical change in the number of users, the network has to carefully utilize the available resources in order to maintain reasonable quality-of-service (QoS). Generally, one of the most important resources in wireless communications is the frequency spectrum. To support a larger number of connected devices in the future IoT, it is natural to add more degrees of freedom in the form of additional operating frequency bands. However, the frequency spectrum is already scarce, and requiring additional frequency resources makes the problem of supporting this massive number of devices even harder to solve. In fact, this issue is extremely important for the cellular architecture in particular, since the spectrum scarcity problem directly influences the reliability and the QoS offered by the network. To overcome this problem, small cell design, interconnecting the cellular network to other wireless networks, and cognitive radio (CR) support are three promising solutions.
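As an illustration of the cognitive radio idea mentioned above, the sketch below shows opportunistic channel selection: a secondary (M2M) device senses a set of licensed channels and transmits only on one it believes is idle. The channel list, the sensing model and the threshold are all invented for the example.

import random
from typing import Optional

CHANNELS = list(range(8))  # hypothetical licensed channel indices

def sense_energy(channel: int) -> float:
    """Stand-in for an energy-detection measurement on one channel;
    a real radio would integrate received power over a sensing window."""
    return random.uniform(0.0, 1.0)

def pick_idle_channel(threshold: float = 0.3) -> Optional[int]:
    """Return the quietest channel whose sensed energy is below the threshold,
    i.e. one the primary (licensed) user appears not to occupy."""
    readings = {ch: sense_energy(ch) for ch in CHANNELS}
    idle = [ch for ch, energy in readings.items() if energy < threshold]
    return min(idle, key=readings.get) if idle else None

if __name__ == "__main__":
    ch = pick_idle_channel()
    if ch is None:
        print("all channels appear busy; defer transmission")
    else:
        print("transmit opportunistically on channel", ch)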
6.2. Machine-to-Machine (M2M) Communications in Vehicular
Networks
When two electronic systems communicate autonomously, that is to say without human
intervention, the process is described as Machine-to-Machine (M2M) communications. The
main goal of M2M communications is to enable the sharing of information between electronic
systems autonomously. Due to the emergence and rapid adoption of wireless technologies, the
ubiquity of electronic control systems, and the increasing complexity of software systems,
wireless M2M has been attracting a lot of attention from industry and academia.
M2M communications include wired communications, but recent interest has focused on wireless M2M communications. Moreover, the number of wireless devices (not including mobile phones) which operate without human interaction (e.g. weather stations, electricity meters, point-of-sale devices) is expected to grow to 1.5 billion by 2014.
Recent research efforts on M2M communications have investigated areas that include
network energy efficiency and green networking, the tradeoff between device power
consumption and device intelligence or processing power, standardization of communications,
data aggregation and bandwidth, privacy and security, and network scalability.
Although "M2M architecture" can technically refer to any number of machines 14
communicating (e.g. based on the fairly broad definition given by the European
Telecommunications Standards Institute, ETSI), it is generally accepted that M2M principles

Proposed by Mr. Hugues M. KAMDJOU & Mr. SOP DEFFO Lionel Landry | .
School year 2018-2019/ Second semester/ FET / Computer Engineering

apply particularly well to networks where a large number of machines is used, even up to the
1.5 billion wireless devices of the future. This means that when an M2M application is
discussed, it is generally presented on a national or global scale with a multitude of sensors
that are centrally coordinated. Consequently, when we evaluate a vehicular network as a type
of M2M architecture, this evaluation is done for a large-scale network that potentially has
many other devices connected to it, some of which may not be related to vehicular networks.
In this work we explore how vehicular networks can leverage the M2M paradigm to support vehicular communications. We present a brief overview of some of the most recent application areas of M2M, namely smart grid technology, home networking, health care, and vehicular networking. The M2M network communication layers in a vehicular network are then explored. We identify five fundamental M2M concepts that have been reported in the literature to address vehicular communication challenges: support for large-scale deployment of devices, cross-platform networking, autonomous monitoring and control, visualization of the system, and security.

Several research efforts have been investigating M2M communications support for vehicular
networks. In a vehicular network, vehicles communicate with other vehicles (Vehicle-to-
Vehicle (V2V)) or with an infrastructure (Vehicle-to-Infrastructure (V2I)). Applications for
vehicular networks can be divided into four broad categories, namely safety and collision
avoidance, traffic infrastructure management, vehicle telematics, and entertainment services
and Internet connectivity.
For the most part, previous vehicular network research has focused on point-to-point communications or support for a limited set of vehicles. From a network architecture point of view, this focus constitutes a bottom-up approach, in which attention is initially placed on developing routing protocols, the PHYsical (PHY) layer, the Medium Access Control (MAC) layer, and broadcasting. M2M networks use centralized and autonomous monitoring and control of spatially distributed data-collecting devices, following a hierarchical top-down approach. The M2M paradigm can therefore improve vehicular networks' capacity to support features such as cross-platform networking, autonomous monitoring and control, and visualization of the devices and information in the M2M network. Given the potential that M2M holds for vehicular networks, we focus on M2M communications for vehicular networks in this work.
To explain how vehicular networks are used, and to better understand the benefits of M2M
vehicular networks, four application areas for vehicular networking are presented below.


a) Safety and Collision Avoidance


When driving a non-networked vehicle, a driver makes decisions based only on information within the driver's Line of Sight (LoS). One of the fundamental reasons for using wireless M2M communication between vehicles is to avoid accidents by overcoming the LoS limitation. In the event of an emergency, information from emergency-detecting sensors (such as accelerometers and the braking system) is sent to other vehicles and to the road-side communication infrastructure within the vehicle's communication range.
b) Traffic and Infrastructure Management
Road congestion is a common problem faced by many drivers around the world every day.
The number of vehicles on many countries' roads has quickly outgrown the ageing
infrastructures. The impact of congestion is not limited to personal discomfort. Congestion
leads to increased fuel consumption, which in turn increases costs, and increased emissions,
which leads to an increase in pollution. Conversely, a better managed and tightly controlled
infrastructure improves productivity and reduces costs and pollution. This provides a strong
incentive for society to use road infrastructures more efficiently.
c) Vehicle Telematics
Vehicle telematics is used to monitor and control vehicles remotely, for example to track or
manage vehicle fleets, or to recover stolen vehicles. Physical properties of the vehicle are
recorded and transmitted to a central coordinating agent. These properties include Global
Positioning System (GPS) information, internal engine parameters such as engine
temperature, or video captured by an externally mounted camera. Bandwidth requirements are
not high when video capture is not required, and second-generation cellular data services (e.g.
General Packet Radio Service (GPRS) and Short Messaging Service (SMS)) are sufficient for
simple tracking applications. There is, however, a growing trend to support video telematics.
Preliminary studies show that image capture and video streaming is possible from a vehicle
with access to a high-bandwidth wireless technology such as WiMAX or Wi-Fi.
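The sketch below illustrates what such a periodic telematics report might look like on the wire: a small JSON record of position and engine data that fits comfortably inside a GPRS/SMS-class low-bandwidth channel. The field names and the upload stub are illustrative assumptions, not a standard telematics format.

import json
import time
from dataclasses import dataclass, asdict

@dataclass
class TelematicsRecord:
    """One periodic vehicle report to the central coordinating agent."""
    vehicle_id: str
    timestamp: float     # Unix time of the sample
    lat: float           # GPS latitude, decimal degrees
    lon: float           # GPS longitude, decimal degrees
    speed_kmh: float
    engine_temp_c: float

def upload(record: TelematicsRecord) -> bytes:
    """Serialize the report compactly; a real system would then send the
    bytes over GPRS, SMS or an IP bearer to the fleet server."""
    return json.dumps(asdict(record), separators=(",", ":")).encode()

if __name__ == "__main__":
    rec = TelematicsRecord("FLEET-042", time.time(), 3.848, 11.502, 62.5, 88.0)
    wire = upload(rec)
    print(len(wire), "bytes:", wire.decode())  # small enough for 2G-class links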
d) In-car Entertainment and Internet Services
The ubiquity of wireless local area networks (e.g. Wi-Fi), cellular broadband networks (e.g. 3rd-Generation (3G) networks) and metropolitan area networks (e.g. WiMAX) makes it possible to reliably deliver online content and entertainment to vehicles. The wireless network requirements depend on the application, which can broadly be categorized as:
 Static low-bandwidth (e.g. text-based e-mail, news, RSS feeds, social media);
 Static high-bandwidth (e.g. Google Street View, Torrent-based file sharing); and


 Streaming video and audio, including videoconferencing (e.g. Skype).


Static content is less susceptible to bandwidth fluctuations and in some cases can be tolerant
of substantial transfer delays. Streaming content requires steady high bandwidth with quick
handover and a large receiving buffer, but the occasional loss of packets is acceptable.

Chapter 7: Networking Protocols for Future Networks


The existing set of network protocols has worked surprisingly well, and attempts at improving it have frequently resulted in no improvement. However, some smart people have identified significant changes to fundamental protocols that can simplify networking. The real question is whether the changes are significant enough to justify their adoption.
If what we have is working so well, why do we need new protocols? Just examine the layers
of functionality that are required to implement security, NAT, QoS, and content management.
It gets complicated very quickly as the layers interact with one another. Some of the proposed
protocols potentially result in network simplification. IPv6 was supposed to improve security,
but further examination revealed some significant security holes, such as neighbor discovery
and automatic tunneling. The new protocols for Future Networks are:
 Named Data Networking (NDN)
 Recursive InterNetwork Architecture (RINA)
 Enhanced IP
 Easy IP (EZIP)
7.1. Composition of Self-Descriptive Protocols for Future Networks
The Internet is a tremendous success story, and with it the TCP/IP protocol suite, which is the core technology of the Internet. Using the Internet requires using the TCP/IP protocols; thus it is hard to change or even modify these protocols. Driven by the demands of ever-emerging applications and the capabilities of new communication networks, many workarounds have been introduced, like sub-layer proliferation (e.g. MPLS at layer 2.5, IPsec at layer 3.5, and TLS at layer 4.5) and erosion of the end-to-end model (middle-boxes such as firewalls, NATs, proxies, caches, etc.). This results in increasing complexity and unpredictable vulnerabilities, making it even harder to introduce new technologies.

As a consequence, the introduction of a Future Internet with a newly designed architecture is being discussed in the research community. One of several requirements that should be fulfilled by a future network architecture is the ability to evolve, i.e. to add, change and remove functionality more easily than today. Just defining a new set of protocols for a future Internet is not sufficient, because it is impossible to take into account all future demands. Consequently, even a completely newly designed future Internet will be subject to ongoing evolution. This demand cannot be met by a single component or protocol, but must be supported by a new architecture that defines the fundamental organization of a system, the relationships among components, and design and evolution principles. In particular, the definition of evolutionary principles that allow deliberate extension and replacement of functionality is important, to avoid an architectural patchwork similar to today's Internet.
In order to enable the evolution of network functionality in large-scale networks, it must be possible to change the functionality of an individual system (network node), and in addition there must be concepts and methods to handle the resulting heterogeneity. This heterogeneity is inevitable, as in large-scale networks it is not possible to synchronize changes, because of technical, logistic and even political constraints. We expect that achieving both goals will be the key to a flexible network architecture that is able to evolve.
Developing concepts and techniques for more flexible networks has been a research topic for many years now, but with varying focuses. In the 1990s, there were several publications about automatic protocol configuration based on micro-protocols. Examples of these are the "Dynamic Network Architecture", DaCaPo and FuKSS. All these approaches compose complex protocols from micro-protocols. In our approach we also use micro-protocols, but with more abstract descriptions and different methods for composing and controlling the interactions of protocols.
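To give a feel for micro-protocol composition, the sketch below stacks small, single-purpose processing stages into one composite protocol: each stage wraps the message on send and unwraps it on receive. The stage set and framing format are invented for illustration; they do not reproduce DaCaPo or any other cited system.

from typing import Callable

# A micro-protocol is a named pair of pure functions: encode on send, decode on receive.
MicroProtocol = tuple[str, Callable[[bytes], bytes], Callable[[bytes], bytes]]

def checksum_mp() -> MicroProtocol:
    """Appends a 1-byte sum of the payload and verifies it on the way back up."""
    def enc(data: bytes) -> bytes:
        return data + bytes([sum(data) % 256])
    def dec(data: bytes) -> bytes:
        body, check = data[:-1], data[-1]
        assert sum(body) % 256 == check, "checksum mismatch"
        return body
    return ("checksum", enc, dec)

def framing_mp() -> MicroProtocol:
    """Prefixes a 2-byte big-endian length field."""
    def enc(data: bytes) -> bytes:
        return len(data).to_bytes(2, "big") + data
    def dec(data: bytes) -> bytes:
        length = int.from_bytes(data[:2], "big")
        return data[2:2 + length]
    return ("framing", enc, dec)

def compose_send(stack: list[MicroProtocol], payload: bytes) -> bytes:
    """Apply each stage top-down, like descending a protocol stack."""
    for _, enc, _ in stack:
        payload = enc(payload)
    return payload

def compose_receive(stack: list[MicroProtocol], wire: bytes) -> bytes:
    """Undo the stages bottom-up on the receiving side."""
    for _, _, dec in reversed(stack):
        wire = dec(wire)
    return wire

if __name__ == "__main__":
    stack = [checksum_mp(), framing_mp()]   # the composition order is configurable
    wire = compose_send(stack, b"hello future net")
    assert compose_receive(stack, wire) == b"hello future net"
    print("round-trip OK over", [name for name, _, _ in stack])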
7.2. New Framework and Protocol for Future Networking Applications
Today's Internet has been a massive success, providing connectivity for a wide variety of applications on a planetary scale. However, in recent years a number of new trends have begun to emerge that indicate fundamental shifts in the requirements that future networks will need to address, and that will require pushing the boundaries of today's Internet technology. The same trends also point to new possibilities and business opportunities that can be unleashed by technology that meets those new challenges. These trends include the need for ever more ambitious service level guarantees for applications such as networked Virtual or Augmented Reality (VR/AR), the Tactile Internet, or the Industrial Internet. Another trend concerns the increasing need for abilities that allow network behavior to be adapted and customized to particular applications and deployments, not just by vendors or operators but by applications and end users. Other trends include the emergence of private networks that move contents and data ever closer to users, as well as the need for greatly simplified network operations.

Fundamentally, many of these trends are putting the best-effort nature of today's Internet to the test, as they point towards the need for high-precision networks: high precision in terms of the service level guarantees that can be given, in terms of the packet loss that is avoided, in the way resources are allocated and accounted for, and in terms of behavior that is custom-guided by the needs of individual services and applications, as opposed to being determined by best-effort principles. While incremental and fragmented technologies are emerging that cater to different aspects and use cases of each of these trends, what has been lacking so far is a comprehensive, holistic approach that rethinks how networking services are provided. This is what Big Packet Protocol (BPP) delivers.
BPP is a novel approach to packet-based networking, intended to address the limitations that current internetworking technology faces. It introduces a new protocol, BPP, as well as the framework to support it. BPP introduces the concept of a BPP Block, a block of data that is carried as part of data packets in addition to header information and user payload. This data includes metadata as well as commands that collectively provide guidance to nodes in the network on how to handle packets and flows. BPP Blocks can be injected or stripped at the edge of a network, to interwork seamlessly with existing network technology. BPP does not commit to a particular "host protocol"; it can be used at Layer 3 with IPv6 or IPv4, or even at Layer 2.
BPP allows networking services to be programmed from the edge of the network by information carried in packets and flows, without requiring access to a network provider's control plane, without the need for administrative access to network devices to reprogram them, and without dependency on lengthy product development or standardization cycles. This programming occurs at the level of the individual packet and flow. BPP packets can affect their own behavior and that of their flow, but not the behavior of other packets or flows, in a similar way to how tenants in a cloud can program only their own virtual resources and not those of other tenants. BPP consists of four cornerstones (a small structural sketch follows the list):
 The BPP protocol itself defines how packets carry information that guides their processing;
 BPP stateful extensions involving so-called statelets allow packets to retain and interact with state in the network;
 BPP infrastructure provides the on-device execution environment for packet commands and statelets;
 BPP network operations provide the functionality required to safely deploy, monitor, and run BPP applications.
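The sketch below renders the BPP Block idea as a plain data structure: a packet that carries, besides its header and payload, a block of metadata and per-packet commands that a node consults before forwarding. The field names, the command vocabulary and the node logic are our own illustrative inventions, not the actual BPP wire format.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BppBlock:
    """Guidance carried inside the packet, alongside header and payload."""
    metadata: dict                                  # e.g. {"deadline_ms": 10} (illustrative)
    commands: list = field(default_factory=list)    # per-packet directives

@dataclass
class Packet:
    header: dict                 # ordinary addressing info; BPP is host-protocol agnostic
    block: Optional[BppBlock]    # None once the block is stripped at the network edge
    payload: bytes

def forward(packet: Packet, queue_delay_ms: float) -> str:
    """Toy node behaviour: obey guidance carried in the packet itself."""
    if packet.block and "drop-if-late" in packet.block.commands:
        if queue_delay_ms > packet.block.metadata.get("deadline_ms", float("inf")):
            return "dropped (deadline exceeded, as the packet itself requested)"
    return "forwarded"

def strip_at_edge(packet: Packet) -> Packet:
    """Remove the BPP Block so legacy networks see an ordinary packet."""
    return Packet(packet.header, None, packet.payload)

if __name__ == "__main__":
    p = Packet({"dst": "10.0.0.7"},
               BppBlock({"deadline_ms": 10}, ["drop-if-late"]),
               b"frame")
    print(forward(p, queue_delay_ms=25))     # guidance in the packet decides: dropped
    print(forward(strip_at_edge(p), 25))     # after stripping: plain forwarding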


7.3. Transport and Networking: Future Internet Trends


The Internet is evolving from the interconnection of physical networks by a collection of protocols towards what is considered the future Internet: a network of applications, information and content. Some characteristics of the future Internet:
 The future Internet can be seen as a network of applications.
 It will enable "peer productivity" and become an "architecture for participation".
 In particular, the future Internet will be based on interactive, edge-based applications and overlays, such as peer-to-peer (P2P) content distribution, Skype, MySpace, or YouTube.
 However, it is not yet certain what the next major application in the future Internet will be.
Recent advances in disruptive technologies, such as P2P content distribution (cf. Fig. 16), and systematic research in high-speed optical and wireless transmission and in the virtualization of links and routers have sparked fundamental discussions about how to design the architecture of the future Internet:
 Is a clean-slate approach mandatory to facilitate new network and application architectures for the future Internet?
 Would an evolutionary process for designing the future Internet be more appropriate?
 What kind of network and application features will drive the design of the future Internet?
Giving a definite answer to these questions is audacious, if not impossible. However, technological challenges in networking and network applications can be identified, and their implications considered for the future design.

Fig. 16. Disruptive development in network design


a) Trends
Today's Internet does not really exhibit a general overload situation that would call for a new network architecture. Actually, it performs quite well, as examples such as P2P file-sharing show. Still, some recent remarkable developments and ideas for using the system call for new network and application architectures and for new ways of operating the system.
- Edge-based services and applications
The services in classical communication networks, such as ISDN or GSM, are rather platform-dependent. The increased use of abstraction layers, such as the Internet Protocol (IP) or overlay techniques, now permits services to be consumed in a variety of wireless and wireline networks such as ADSL, WLAN, or UMTS. Hence, the transition from single network-centric services to application-centric multi-network services has occurred, cf. Fig. 17.

Fig. 17. The transition to multi-network services


- High-speed data transport
Advanced optical core networks using dense wavelength division multiplexing (DWDM) or hybrid optical network architectures have brought tremendous amounts of flexible point-to-point transmission capacity into core networks. Fibre-based access technologies, such as Ethernet passive optical networks (EPON), make it possible to deliver this capacity to end users at very low cost.
- Network and service control and management
The need for fast responses to failures and for the reduction of operational costs (OPEX) led to the development of autonomous procedures for network and service operation.
7.4. Technical Challenges for IP 2020
a) Overlays for participation
Similar to the current trend of user-generated content, future networks will increasingly derive their applications, services and infrastructures from user-generated contributions. This paradigm refers mainly to content, but also to hardware, as the FON project has recently shown for WLAN access points.
An easy tool for integrating widely distributed contributions is the virtual network, the so-called overlay. Thus, a major challenge for the architecture of the future Internet is the support of overlays for participation. Edge nodes should be enabled to form overlays of coordinated communities. They require mechanisms to define overlays with application-specific name spaces, routing, and self-organizing procedures for topology and resource management, cf. Fig. 18.

Fig. 18. Overlay networks.


b) High-speed data transport: future access systems, core network and
routing architecture
A major challenge for the future Internet is the heterogeneity of access technologies. For example, future mobile devices will move through a landscape of different wireline and wireless access systems and operators. Moreover, the number of access points and their inter-connectivity may fluctuate permanently, since nodes may fail or operators may interconnect them on demand when additional coverage is needed. Self-organizing vertical handover mechanisms are needed to bridge the heterogeneity between the access technologies, and they have to be executed in very little time. Hence, the architecture of the future Internet requires scalable mobility management mechanisms.
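The sketch below illustrates one simple vertical handover policy consistent with this discussion: a mobile node periodically scores all currently visible access networks and switches only when another technology beats the serving one by a hysteresis margin, which suppresses ping-pong handovers. The network names, scoring weights and margin are invented for the example.

HYSTERESIS = 0.15  # switch only on a clear win, to avoid ping-pong handovers

def score(network: dict) -> float:
    """Toy utility: weigh signal quality against load. The weights are arbitrary."""
    return 0.7 * network["signal"] + 0.3 * (1.0 - network["load"])

def handover_decision(serving: dict, visible: list) -> dict:
    """Return the network to attach to next (possibly the current one)."""
    best = max(visible, key=score)
    if best["name"] != serving["name"] and score(best) > score(serving) + HYSTERESIS:
        return best   # vertical handover to the better technology
    return serving    # stay: the advantage is within the hysteresis margin

if __name__ == "__main__":
    wlan = {"name": "WLAN", "signal": 0.9, "load": 0.2}
    umts = {"name": "UMTS", "signal": 0.6, "load": 0.4}
    current = handover_decision(umts, [wlan, umts])
    print("attached to", current["name"])  # WLAN wins by more than the margin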
c) Future service control and management
Current self-organization mechanisms for applications and services are typically designed for end-user constraints, where the consumption of network resources is of minor interest, and for the optimization of a single objective. In the future Internet, however, these algorithms need to consider network resources, multiple stakeholders, and multiple objectives.

Future reliable edge-based services require the provisioning of checkable resources. Hence, flexible service-level agreements (SLAs) are needed for negotiating and validating resource quality. The new SLAs should address the combination and encapsulation of the provided services, their provisioning on small time scales, and meaningful quality concepts, e.g. QoE.
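A minimal sketch of SLA validation in this spirit: negotiated thresholds are compared against measured service metrics, including a QoE score, averaged over a short time window. The metric names, thresholds and the 1-5 QoE scale are assumptions made for illustration.

# Negotiated SLA thresholds (illustrative values, not from any real agreement).
SLA = {"max_latency_ms": 50.0, "min_throughput_mbps": 8.0, "min_qoe": 3.5}

def validate_sla(window: list) -> list:
    """Check averaged measurements from one time window against the SLA.
    Returns the list of violated clauses (an empty list means compliant)."""
    n = len(window)
    avg = {k: sum(m[k] for m in window) / n for k in window[0]}
    violations = []
    if avg["latency_ms"] > SLA["max_latency_ms"]:
        violations.append("latency")
    if avg["throughput_mbps"] < SLA["min_throughput_mbps"]:
        violations.append("throughput")
    if avg["qoe"] < SLA["min_qoe"]:   # QoE on a 1 (bad) .. 5 (excellent) scale
        violations.append("qoe")
    return violations

if __name__ == "__main__":
    window = [
        {"latency_ms": 42.0, "throughput_mbps": 9.1, "qoe": 4.1},
        {"latency_ms": 61.0, "throughput_mbps": 7.8, "qoe": 3.2},
    ]
    bad = validate_sla(window)
    if bad:
        print("SLA violated on:", bad)
    else:
        print("SLA met")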


d) Future layering and abstraction architecture


Today's Internet architecture is largely based on the hourglass concept of IP, where all data are transported over the IP protocol and any IP packet can be transported over any network. This so-called "IP waist" increasingly constitutes a bottleneck in today's Internet architecture.
e) Scientific challenges
The technical requirements described above reveal two areas of scientific challenge in the design of the future Internet. The first category of challenges aims at reorganizing the layering structure of today's Internet and at developing new algorithms and architectures that permit edge nodes to control communication and routing.

Chapter 8: Fusion of Future Networking Technologies and Big Data / Fog Computing
The Internet of Things (IoT) is one of the spotlight innovations with the potential to provide unlimited benefits to our society. The development of the IoT is about to reach a stage at which many of the objects around us will have the ability to connect to the Internet and communicate with each other without human intervention. Originally, the IoT was intended to reduce human data-entry efforts, using different types of sensors to collect data from the environment and permitting the automatic storage and processing of all these data.

As the IoT is characterized by limited computation in terms of processing power and storage, it suffers from many issues concerning performance, security, privacy and reliability. The integration of the IoT with the cloud, known as the Cloud of Things (CoT), is the right way to overcome most of these issues. The CoT simplifies the flow of IoT data gathering and processing and provides quick, low-cost installation and integration for complex data processing and deployment.
The integration of the IoT with cloud computing brings many advantages to different IoT applications. However, as there are a large number of IoT devices with heterogeneous platforms, the development of new IoT applications is a difficult task. This is because IoT applications generate huge amounts of data from sensors and other devices. These big data are subsequently analyzed to determine decisions regarding various actions. Sending all these data to the cloud requires excessively high network bandwidth. To overcome these issues, fog computing comes into play. The term fog computing was coined by Cisco. It is a new technology that provides many benefits to different fields, especially the IoT. Similar to the cloud, fog computing provides services to IoT users, such as data processing and storage. Fog computing is based on providing data processing and storage capabilities locally at the fog devices instead of sending the data to the cloud. Both the cloud and the fog provide storage, computing and networking resources.
8.1. Emerging Trends in Cloud Computing, Big Data, Fog Computing,
IoT
The integration of the cloud with the IoT, known as CoT, has many benefits. For instance, it
helps to manage IoT resources and provides more cost-effective and efficient IoT services. In
addition, it simplifies the flow of the IoT data and processing and provides quick, low-cost
installation and integration for complex data processing and deployment.
The CoT paradigm is not straightforward; it also introduces new challenges to the IoT system that cannot be addressed by the traditional centralized cloud computing architecture, such as latency, capacity constraints, resource-constrained devices, network failure with intermittent connectivity, and enhanced security.

In addition, the centralized cloud approach is not appropriate for IoT applications where operations are time-sensitive or Internet connectivity is poor. There are many scenarios where milliseconds can have serious significance, such as telemedicine and patient care. The same holds for vehicle-to-vehicle communications, where collision and accident avoidance cannot tolerate the latency caused by the centralized cloud approach. Therefore, an advanced cloud computing paradigm that improves on the capacity and latency constraints is required to handle these challenges. Cisco suggested a new technology, called fog computing, to address most of these challenges.
8.2. Big Data and Fog Computing
Fog computing is a paradigm that provides limited capabilities, such as computing, storage and networking services, in a distributed manner between the end devices and classic cloud computing. It provides a good solution for IoT applications that are latency-sensitive. Although the term was originally coined by Cisco, fog computing has been defined by many researchers and organizations from a number of different perspectives. One definition from the research literature views fog nodes as devices that carry out processing and storage tasks; these tasks can support basic network functions or new services and applications that run in a sandboxed environment, and "users leasing part of their devices to host these services get incentives for doing so". Fog computing is also defined by the OpenFog Consortium as "a system-level horizontal architecture that distributes resources and services of computing, storage, control and networking anywhere along the continuum from Cloud to Things".


a) Characteristics of Fog Computing


Essentially, fog computing is an extension of the cloud, but closer to the things that work with IoT data. As shown in Fig. 19, fog computing acts as an intermediary between the cloud and the end devices, bringing processing, storage and networking services closer to the end devices themselves. These devices are called fog nodes. They can be deployed anywhere with a network connection. Any device with computing, storage and network connectivity can be a fog node, such as industrial controllers, switches, routers, embedded servers and video surveillance cameras.

Fig.19. Fog computing is an extension of the cloud but closer to end devices
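As a toy illustration of this division of labor, the sketch below routes each task either to a nearby fog node or to the cloud, based on how latency-sensitive it is and how much data it must ship. The round-trip times, uplink rate, thresholds and task names are invented planning figures, not measurements of any real deployment.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline_ms: float   # how quickly a response is needed
    payload_mb: float    # how much data must be shipped for processing

# Illustrative planning figures for the two processing tiers.
FOG_RTT_MS, CLOUD_RTT_MS = 5.0, 120.0
UPLINK_MBPS = 10.0

def place(task: Task) -> str:
    """Send the task to the fog if the cloud round trip plus upload time
    would miss its deadline; otherwise use the cloud's larger capacity."""
    upload_ms = task.payload_mb * 8 / UPLINK_MBPS * 1000
    if CLOUD_RTT_MS + upload_ms > task.deadline_ms:
        return "fog"
    return "cloud"

if __name__ == "__main__":
    for t in (Task("collision-warning", deadline_ms=20, payload_mb=0.01),
              Task("nightly-analytics", deadline_ms=60_000, payload_mb=50)):
        print(f"{t.name}: process at the {place(t)}")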
8.3. Future Internet applications in IoT
The Internet of Things (IoT) provides connectivity for anyone, at any time and place, to anything, at any time and place. With the advancement of technology, we are moving towards a society where everything and everyone will be connected. The IoT is considered the future evolution of the Internet, realizing machine-to-machine (M2M) communication. The basic idea of the IoT is to allow the autonomous and secure connection and exchange of data between real-world devices and applications. The IoT links real life and physical activities with the virtual world.
The number of Internet-connected devices is increasing at a rapid rate. These devices include personal computers, laptops, tablets, smart phones, PDAs and other hand-held embedded devices. Most mobile devices embed different sensors and actuators that can sense, perform computation, take intelligent decisions and transmit useful collected information over the Internet. Using a network of such devices with different sensors can give birth to an enormous number of amazing applications and services that can bring significant personal, professional and economic benefits.
The Internet has evolved tremendously in the last few years, connecting billions of things globally. These things have different sizes, capabilities, processing and computational power, and support different kinds of applications. Thus, the traditional Internet is merging into the smart future Internet, called the IoT. The generic scenario of the IoT is shown in Fig. 20. The IoT connects real-world objects and embeds intelligence in the system to smartly process object-specific information and take useful autonomous decisions. Thus, the IoT can give birth to enormously useful applications and services that we never imagined before. With the advancement of technology, devices' processing power and storage capabilities have significantly increased, while their sizes have been reduced.

Fig.20. The IoT generic scenario


These smart devices are usually equipped with different types of sensors and actuators. These devices are also able to connect and communicate over the Internet, which enables a new range of opportunities. Moreover, physical objects are increasingly equipped with RFID tags or other electronic bar codes that can be scanned by smart devices, e.g., smart phones or small embedded RFID scanners. The objects have a unique identity, and their specific information is embedded in the RFID tags. In 2005, the International Telecommunication Union (ITU) proposed that the "Internet of Things" would connect real-world objects in both a sensory and an intelligent manner. Fig. 21 shows a basic IoT system implementing different types of applications or services. The things connect and communicate with other things that implement the same service type. The basic simplified workflow of the IoT can be described as follows (a minimal end-to-end sketch follows the list):
 Object sensing, identification and communication of object-specific information. The information is the sensed data about temperature, orientation, motion, vibration, acceleration, humidity, or chemical changes in the air, depending on the type of sensors. A combination of different sensors can be used in the design of smart services.
 Triggering an action. The received object information is processed by a smart device/system, which then determines an automated action to be invoked.
 The smart device/system provides rich services and includes a mechanism to give feedback to the administrator about the current system status and the results of the actions invoked.
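A minimal end-to-end sketch of this sense-decide-act loop, with a hypothetical temperature sensor, a single threshold rule, and console output standing in for a real actuator and administrator feedback channel:

import random
from typing import Optional

THRESHOLD_C = 30.0  # illustrative rule: start cooling above 30 degrees Celsius

def sense() -> dict:
    """Step 1 - object sensing and identification (simulated temperature)."""
    return {"object_id": "room-7/temp-sensor", "temperature_c": random.uniform(20, 40)}

def decide(reading: dict) -> Optional[str]:
    """Step 2 - trigger an action: the smart system turns data into a command."""
    if reading["temperature_c"] > THRESHOLD_C:
        return "start-cooling"
    return None

def act_and_report(reading: dict, action: Optional[str]) -> None:
    """Step 3 - invoke the action and feed status back to the administrator."""
    status = action if action else "no action needed"
    print(f"[{reading['object_id']}] {reading['temperature_c']:.1f} C -> {status}")

if __name__ == "__main__":
    for _ in range(3):  # three rounds of the basic sense-decide-act workflow
        reading = sense()
        act_and_report(reading, decide(reading))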

Fig.21: Basic IoT system.

Conclusion
In this course, we learned definitions of network analysis, architecture, and design; the importance of network analysis in understanding the system and providing a defensible architecture and design; and the model for the network analysis, architecture, and design processes. The integration and convergence of traditional telecommunication networks and the Internet is realized with the standardization of the Next Generation Network (NGN) concept by ITU-T. However, the NGN is not something that happens at a given moment in time; it is an ongoing process, an umbrella of principles and recommendations for the synergy of different existing and future technologies, which grows and adapts to the continuously changing environment in terms of technologies as well as business and regulatory aspects. Finally, migration to NGN and future networks brings many challenges to network and service providers, telecommunications and media regulators, equipment vendors, and other related business segments, but at the same time it provides endless possibilities for the rapid innovation of new networks, protocols and services.

Bibliography:
1. Future Networks: Architectures, Protocols, and Applications, IEEE Access, 2017.
2. Richard Li et al., "New Framework and Protocol for Future Networking Applications", Proceedings of the 2018 Workshop on Networking for Emerging Applications and Technologies, pp. 21-23, 2018.
3. Abdelmohsen Ali, Walaa Hamouda, and Murat Uysal, "Next Generation M2M Cellular Networks: Challenges and Practical Considerations", arXiv, 20 June 2015.
4. Raul Aquino-Santos, Víctor Rangel-Licea and Arthur Edwards Block, Emerging Technologies in Wireless Ad-hoc Networks: Applications and Future Development, IGI Global, 2010.
5. A. Alrawais, A. Alhothaily, C. Hu and X. Cheng, "Fog Computing for the Internet of Things: Security and Privacy Issues", IEEE Internet Computing, vol. 21, pp. 34-42, 2017.
