The core layer provides routing services to other parts of the data center, as
well as to services outside of the data center such as the Internet,
geographically separated data centers and other remote locations.
This model scales reasonably well, but it is subject to bottlenecks when uplinks
between layers are oversubscribed. Bottlenecks arise from latency incurred as
traffic flows through each layer and from the blocking of redundant links
(assuming the use of the Spanning Tree Protocol, STP).
Three-layer designs are falling out of favor in modern data center networks,
despite their ubiquity and familiarity. What's taking over? Leaf-spine
topologies.
As organizations seek to maximize the utility and utilization of their
respective data centers, there's been increased scrutiny of mainstream
network topologies. "Topology" is the way in which network devices are
interconnected, forming a pathway that hosts follow to communicate with
each other.
The standard network data center topology was a three-layer architecture:
the access layer, where users connect to the network; the aggregation
layer, where access switches intersect; and the core, where aggregation
switches interconnect to each other and to networks outside of the data
center.
ETHAN BANKS
The design of this model provides a predictable foundation for a data center
network. Physically scaling the three-layer model involves identifying port
density requirements and purchasing an appropriate number of switches for
each layer. Structured cabling requirements are also predictable, as
interconnecting between layers is done the same way across the data
center. Therefore, growing a three-layer network is as simple as ordering
more switches and running more cable against well-known capital and
operational cost numbers.
When hosts attached to one access switch communicate with hosts attached to
another access switch, the uplinks between the access and aggregation layers
become a potential -- and probable -- congestion point. Three-tier network designs
often exacerbate the congestion issue. Because spanning tree blocks redundant
links to prevent loops, access switches with dual uplinks are only able to use one
of the links for a given network segment.
Adding more bandwidth between the layers in the form of faster inter-switch links
helps the three-layer model scale past congestion, but only to a point. The
problems with host-to-host east-west traffic don't occur one conversation at a time.
Instead, hosts talk to other hosts all over the data center at any given time, all the
time. So while adding bandwidth facilitates these conversations, it's only part of the
answer.
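The congestion described above is usually expressed as an oversubscription ratio. A minimal sketch, with assumed port counts, shows how the ratio worsens when STP blocks one of a pair of redundant uplinks:

```python
# Illustrative sketch (port counts and speeds are assumptions):
# oversubscription is the ratio of host-facing bandwidth to uplink bandwidth.

def oversubscription_ratio(host_ports, host_port_gbps, uplinks, uplink_gbps):
    """Ratio of downstream (host) capacity to upstream (uplink) capacity."""
    return (host_ports * host_port_gbps) / (uplinks * uplink_gbps)

# A 48-port gigabit access switch with two 10G uplinks:
print(oversubscription_ratio(48, 1, 2, 10))   # 2.4 (a 2.4:1 ratio)
# The same switch when STP blocks one redundant uplink:
print(oversubscription_ratio(48, 1, 1, 10))   # 4.8 (a 4.8:1 ratio)
```

Faster inter-switch links lower the ratio, but as the article notes, bandwidth alone does not remove the structural chokepoint.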
Chassis switches provide slots that can be filled with line cards to meet port
density requirements. Chassis switches tend to be costly compared to fixed-configuration
switches, but they are necessary in traditional three-layer topologies where large numbers of
switches from one layer connect to two switches at the next layer. Leaf-spine
allows interconnections to be spread across a large number of spine switches,
obviating the need for massive chassis switches in some leaf-spine designs. While
chassis switches can be used in the spine layer, many organizations are finding
cost savings in deploying fixed-switch spines.
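The port math behind that trade-off is simple: in a leaf-spine fabric, every leaf has one uplink to every spine. A minimal sketch, with assumed switch counts, shows why the uplink load spreads across many fixed switches rather than concentrating on two chassis:

```python
# Minimal sketch (switch counts are assumptions): in a full-mesh
# leaf-spine fabric, each leaf connects once to every spine.

def fabric_links(leaves, spines):
    """Total leaf-to-spine links in a full-mesh leaf-spine fabric."""
    return leaves * spines

def spine_ports_needed(leaves):
    """Each spine needs one downlink port per leaf."""
    return leaves

print(fabric_links(20, 4))        # 80 inter-switch links in total
print(spine_ports_needed(20))     # each of the 4 spines needs only 20 ports
```

With 20 leaves and 4 spines, no single spine needs more than 20 fabric ports, which is well within the reach of a fixed-configuration switch; in a two-switch aggregation design, those same 80 uplinks would converge on just two devices.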
Leaf-spine is currently the favored design for data center topologies of almost any
size. It is predictable, scalable and solves the east-west traffic problem. Any
organization whose IT infrastructure is moving toward convergence and high
levels of virtualization should evaluate a leaf-spine network topology for its data
center.
Introduction
Data center infrastructure now extends beyond the brick and mortar walls to public
clouds, where data that was once considered too sensitive to leave the building is
now hosted. Servers have been consolidated and fully virtualized, and new levels
of integration achieved with converged infrastructure systems. The data center has
to be highly scalable and systems need to be faster every year. IT professionals
within the data center have to be jacks of all trades. And of course, the business
side demands all of that on a minimal IT budget.
In this guide, we present all the tenets of a modern data center strategy, including
the latest in data center infrastructure, operations and management. We also cover
the changing role of IT, important IT skills of tomorrow and the tools IT pros need
to manage a hybrid environment.
Crunching numbers
Big data, data center applications
Big data has established its place in the enterprise. Staying on top of the latest
software and hardware changes ensures that your company will make the most of
the data deluge.
Metrics tools that examine energy efficiency provide broad measurements that are helpful for enterprises that
want to minimize energy use.
Their choice will likely come down to price and vendors' technological
offerings, but also existing relationships.
"People are willing to look at their traditional network vendors now doing
SDN because a relationship has already been established and a trust
level found," Burke said. Purchasing SDN from a familiar vendor eliminates
the fear of a new company being acquired or disappearing.
Burke added that those looking to new SDN vendors are not concerned
with protecting existing relationships or trying to make SDN fit into a current
vendor's marketing or sales strategies.
"[New vendors] offer a way to make a dramatic change and to shift rapidly
to a network driven with and by orchestration tools and automation," he
said.
Cumulus Linux switches are more like a server with an OS that can process
network packets and control the network processor, which can be a Broadcom
chip, for example. The programmability provides a hidden benefit for users: you
can now configure and manipulate switches en masse using tools like Puppet, Chef or
CFEngine. So from the point of view of an SDN partner to Cumulus
Networks, such as VMware, it means that the network device is also as programmable as
pure software components.
I don't want to pit one analysis against another, but last year
I wrote about a Credit Suisse report that deeply investigated
what networking disaggregation would mean for
Cisco's business generally, and its networking business in
particular. Suffice it to say that the report pointed to huge
margin pressure on Cisco from networking
disaggregation. Add to that the customer benefit that users
can now reuse the same operational tools for networking that
they're using for their compute, and you have a buy-side and a
sell-side economic impact that is hard to argue against.
3) As a related point, separating hardware from software also complicates troubleshooting.
Whether you have white boxes or Cisco switches, problems can occur anywhere in a network,
and with the disaggregated model, customers lose the ability to troubleshoot across their
entire network. And hardware matters when it comes to scale and performance. That's
another huge limitation of the software-only model.
In today's traditional servers, the ratios of CPU to memory to I/O are mostly
unchangeable. With disaggregated servers, those systems are separated into
discrete pools of resources that are mixed and matched to create systems of
different sizes and shapes. Data center architects can then "compose" systems that
are CPU-, memory- or I/O-intensive through an orchestration interface, depending
on workload demands, and then tear them down to create another system with a new
profile.
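The compose-and-tear-down cycle can be sketched in a few lines. This is a hypothetical model, not any vendor's orchestration API; the pool sizes and resource names are assumptions for illustration:

```python
# Hypothetical sketch of "composing" systems from disaggregated pools.
# Pool sizes and resource names are assumptions, not a real orchestration API.

pools = {"cpu_cores": 512, "memory_gb": 4096, "io_gbps": 400}

def compose(profile):
    """Reserve resources from the shared pools for one composed system."""
    for resource, amount in profile.items():
        if pools[resource] < amount:
            raise RuntimeError(f"pool exhausted: {resource}")
    for resource, amount in profile.items():
        pools[resource] -= amount
    return dict(profile)

def tear_down(system):
    """Return a composed system's resources to the pools for reuse."""
    for resource, amount in system.items():
        pools[resource] += amount

# Compose a memory-intensive system, then recycle its resources
# into a CPU-heavy one with a different profile:
big_mem = compose({"cpu_cores": 16, "memory_gb": 1024, "io_gbps": 10})
tear_down(big_mem)
cpu_heavy = compose({"cpu_cores": 128, "memory_gb": 256, "io_gbps": 25})
```

The point of the sketch is that the same physical resources serve both profiles at different times, which is exactly what fixed CPU-to-memory ratios in traditional servers prevent.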
The push for disaggregated servers could trickle down to enterprises and high-performance
computing (HPC) environments, if the economics are right, said Kuba
Stolarski, IDC research manager for server, virtualization and workloads.
"The standard justification for disaggregated systems is the refresh or maintenance
story," Stolarski said. "Today, if the processor is due for a refresh, then the whole
box is going to go," even if the other components are still viable.
Everyone who spends a lot on servers cares about saving money, not just
hyperscale folk, said Todd Brannon, Cisco director for product marketing for
its Unified Computing product line. The CPU represents about two-thirds of the
total system cost, and the rest (memory, I/O subsystem) typically doesn't need
replacing as often as the processor. In a disaggregated system, "just replace the
processing cartridge and maintain the investment in everything else," Brannon
said.
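A back-of-the-envelope calculation makes the refresh economics concrete. It uses the two-thirds CPU cost share cited above; the dollar figure is an assumption for illustration:

```python
# Back-of-the-envelope sketch using the article's ratio: the CPU represents
# about two-thirds of total system cost. The dollar figure is assumed.

system_cost = 9_000                  # hypothetical cost of one server
cpu_share = 2 / 3

full_refresh = system_cost                      # replace the whole box
cartridge_refresh = system_cost * cpu_share     # replace only the CPU part
preserved = full_refresh - cartridge_refresh    # memory and I/O kept in service

print(f"full refresh:      ${full_refresh:,.0f}")
print(f"cartridge refresh: ${cartridge_refresh:,.0f}")
print(f"preserved:         ${preserved:,.0f}")
```

On these assumptions, roughly a third of each refresh cycle's spend stays invested in memory and I/O that were still viable anyway.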
But plenty of things could prevent disaggregated systems from excelling:
Fundamental laws of physics and engineering, a failure to get the required
technology costs down and management complexity.
An interconnect fabric must link the disaggregated resource pools across physical
distances, but an expensive interconnect technology will eat into the potential
utilization benefits and derail the whole plan.
Systems designers might find that emerging Ethernet standards provide sufficient
performance and low enough latency to support disaggregation. Several designs
for hyperscale systems rely on 10GbE. Grouped into four lanes with Quad Small
Form-factor Pluggable (QSFP) transceivers, Ethernet can go to 40Gb, said Kevin
Deierling, vice president of marketing at Mellanox, a supplier of high-performance
Ethernet and InfiniBand interconnect technology.
Meanwhile, work is underway on 25GbE, he said. Four such lanes get you to 100Gb -- the
speed of many InfiniBand fabrics required in HPC environments.
Whatever happens, systems vendors aren't waiting for interconnect technologies to
be fully baked. HP's Moonshot chassis, for example, is currently equipped with
several fabrics that will connect future disaggregated system components, said Dr.
Bradicich. One Ethernet fabric is used today; the second, "proximal array" fabric
was revealed as part of the 64-bit ARM-based m400 server announcement; and the
third, as-yet-unutilized fabric, called the 2D Torus Mesh, allows any one
Moonshot component to communicate directly with any of its neighbors, Bradicich
explained.
"The highways are laid down and paved. There are just no cars on it yet."
Vendors are now packaging what hyperscale operators built internally for public
consumption. "They're saying 'hey, we've already done the
heavy lifting. Let's bring this to the enterprise.'"
Cisco announced the UCS M-Series Modular Servers, the first example of a
commercially available disaggregated system, said Brannon. The front of the 2U
M-Series chassis holds eight processing cartridges that contain CPU and memory
-- connected to disaggregated storage and connectivity resources via Cisco's Virtual
Interface Card.
Ericsson Radio
With the introduction of the Ericsson Radio System we are also introducing the
industry's most energy-efficient and compact radio solution, the Radio 2217,
which maintains performance leadership at half the size and weight.
This is the smallest and lightest macro coverage radio in the new portfolio with a
volume of approximately 10 liters and a weight of 12 kg. It is designed to support
LTE and WCDMA and Multi-standard mixed mode with up to 40 MHz of LTE and
up to 6 carriers of WCDMA, all within a 40 MHz of instantaneous bandwidth
(IBW).
With 4-way Rx diversity becoming increasingly important for improving uplink
performance in many systems, the new 2Rx unit, Radio 0208, provides an
easy and flexible way of equipping sites with hardware supporting this functionality.
Radio 0208 has a small form factor of 6 liters and a weight of 8 kg, and it integrates
well with any other radio to constitute a 4Rx/2Tx system.
Another new addition to the Ericsson Radio System portfolio is the smallest, most
powerful outdoor microcell on the market, the Radio 2203. Its sleek, minimalistic
Scandinavian design complements any environment.
The radio equipment used for the RBS 6000 family supports multi-standard
operation, which means it can operate on any of the standards, either in single
mode (one standard at a time) or in mixed mode (simultaneous operation on
more than one standard).
The radio equipment can be of two types:
The RRUs are designed to be installed close to the antennas, and can be either
wall or pole mounted. There are different versions of RRUs that support Macro or
Micro configurations.
In the AIR units on the other hand, the radio unit and the antenna are combined
into one single unit and installed in the usual antenna location.
The radio equipment is multi-standard capable, which means that the different
units can operate on all standards: GSM, WCDMA and LTE. Two standards can
operate simultaneously.
The M2M market in India has now started evolving. It will be driven mainly
by automotive and commercial telematics, household monitoring and control, financial
services and retail, smart homes and smart metering, manufacturing, and transportation
and logistics.
In the Indian market, the automotive, transport and logistics industries lead in terms of
M2M adoption. However, utilities will drive future market growth, as the Government of
India is taking increasingly serious initiatives to deploy smart energy meters to address
the concerns of growing power theft and round-the-clock monitoring of power supply.
As India's first convention on M2M, now in its successful fourth edition, the
forum will offer a world-class platform for understanding the business value proposition of
M2M and how M2M technologies and applications will help businesses fuel growth and
productivity in the Indian market.
To further improve HSPA performance, a scheme utilising two HSDPA carriers to increase
peak data rates has been made available. The scheme, known under a variety of names and
acronyms (DC-HSPA, Dual Carrier HSPA, Dual Cell HSPA, DC-HSDPA, Dual Cell HSDPA),
also better utilises the available resources by multiplexing carriers in the CELL_DCH state.
DC-HSPA or DC-HSDPA enables better utilisation of the resources, especially under poor
channel conditions where signal to noise ratios may not be as high as normally needed for high
data rate links.
HS-DPCCH: While it would have been possible to utilise two HS-DPCCHs, one on each
carrier, only one is used, with the feedback information mapped to the single channel.
Either 5 or 10 CQI (Channel Quality Indicator) bits are used: five when only one channel
is utilised, and ten when two are in use. The compound CQI is made up from two
independent CQIs, one for each channel. New channel coding schemes are defined for
the overall HARQ feedback format.
HS-SCCH: The HS-SCCH is transmitted on both the anchor, or primary, carrier and
the supplementary one, and the UE has to monitor up to four HS-SCCH codes on
each carrier. However, the UE is only required to be able to receive up to one HS-SCCH
on the serving or main cell and one HS-SCCH on the secondary cell.
UE category | 3GPP release | Max no. of HS-DSCH codes | Modulation | Max raw data rate (Mbps) | Comments
21          | Rel 8        | 15                       | 16-QAM     | 23.4                     | DC-HSDPA
22          | Rel 8        | 15                       | 16-QAM     | 28.0                     | DC-HSDPA
23          | Rel 8        | 15                       | 64-QAM     | 35.3                     | DC-HSDPA
24          | Rel 8        | 15                       | 64-QAM     | 42.2                     | DC-HSDPA
25          | Rel 9        | 15                       | 16-QAM     | 46.7                     | DC-HSDPA + MIMO
26          | Rel 9        | 15                       | 16-QAM     | 55.9                     | DC-HSDPA + MIMO
27          | Rel 9        | 15                       | 64-QAM     | 70.6                     | DC-HSDPA + MIMO
28          | Rel 9        | 15                       | 64-QAM     | 84.4                     | DC-HSDPA + MIMO
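The category figures above can be roughly reproduced from physical-layer parameters: the UMTS chip rate divided by the HS-PDSCH spreading factor gives the symbol rate per code, which is then multiplied by bits per symbol, code count, carriers and MIMO streams. A minimal sketch (the category table's exact values reflect 3GPP transport-block sizes, so they come out slightly lower than this raw calculation):

```python
# Approximate raw HSDPA peak rates from physical-layer parameters.
# The 3GPP category table reflects actual transport-block sizes,
# so its figures are slightly below these raw numbers.

CHIP_RATE = 3_840_000        # UMTS chip rate, chips/s
SF = 16                      # HS-PDSCH spreading factor
BITS_PER_SYMBOL = {"16-QAM": 4, "64-QAM": 6}

def peak_rate_mbps(modulation, codes=15, carriers=1, mimo_streams=1):
    """Raw physical-layer rate in Mbit/s for an HSDPA configuration."""
    symbol_rate = CHIP_RATE / SF                   # symbols/s per code
    bits_per_s = symbol_rate * BITS_PER_SYMBOL[modulation] * codes
    return bits_per_s * carriers * mimo_streams / 1e6

# Dual-carrier 64-QAM (compare UE category 24, 42.2 Mbit/s):
print(peak_rate_mbps("64-QAM", carriers=2))                   # 43.2
# Dual-carrier 64-QAM with 2x2 MIMO (compare category 28, 84.4 Mbit/s):
print(peak_rate_mbps("64-QAM", carriers=2, mimo_streams=2))   # 86.4
```

Doubling carriers doubles the raw rate, and MIMO doubles it again, which matches the progression from category 24 to category 28 in the table.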
DC-HSUPA: Higher performance for the uplink.
The popularity of smartphones has created significant demand
on uplink network capacity. Uplink traffic consists of user traffic
and application signaling. A prime example of uplink user traffic
is picture sharing, which generates significant uplink network
load particularly during special events, such as professional
Carrier aggregation across spectrum bands.
Dual Cell HSDPA (DC-HSPA) aggregates two adjacent carriers
to offer a higher peak data rate and improve capacity and user
experience for bursty smartphone data traffic. Dual Band Dual
Cell HSDPA (DBDC-HSDPA) further enables aggregation
across two frequency bands. The following diagram illustrates
the DBDC-HSDPA concept.
[Diagram: DBDC-HSDPA aggregating Carrier 1 and Carrier 2 across two frequency bands]
Improving capacity and performance by aggregating downlink carriers.
3C/4C HSDPA: Significantly increases peak data rate compared to HSDPA.
Three Carrier HSDPA (3C-HSDPA)
The use of smartphone applications requiring greater bandwidth is becoming
more prevalent, putting significant strain on cellular network capacity and user
experience. 3C-HSDPA is a downlink carrier aggregation technique that addresses
this challenge by leveraging the characteristics of smartphone traffic.