Compliments of

Software Defined Data Centers (SDDC)
Nexenta Special Edition

Brian Underdahl
Robert Novak

Find the advantages to using SDDC

Integrate off-the-shelf technologies and virtualization to deliver unified storage, networking, and data center capabilities without locking in to proprietary hardware.

• Discover SDDC — what it is and what it does
• Switch to Software Defined Storage — lower TCO and improve scalability and performance
• Understand storage — optimize the right underlying hardware
• Get the right storage support — see the solutions that support your business requirements

Open the book and find:
• What you need to know about Software Defined Data Centers
• If Software Defined Storage is right for you
• How an SDDC lowers TCO and improves scalability
• Why atomic storage is important

Learn:
• What an SDDC is and what it can do for you
• How to move to Software Defined Storage and software defined networking

ISBN: 978-1-118-90356-8
Not for resale
Nexenta is the global leader in Software Defined Storage (SDS).
Nexenta delivers secure, easily managed, highly available,
reliable, and scalable storage software solutions at ultra-low
TCO. Nexenta solutions are hardware-, protocol-, workload-, and
app-agnostic, giving organizations the freedom to innovate and
realize the true benefits of cloud computing through virtualization-
enabled Software Defined Data Centers (SDDC). Nexenta
enables workloads ranging from rich-media-driven social media to
mobility, and from the Internet of Things to Big Data. Built on an
open source platform, Nexenta delivers software-only unified
storage management solutions through a global channel
partner network.

At Nexenta, we provide the SMARTS for your storage needs:

Security
Manageability
Availability
Reliability
Lower TCO
Scalability

For more information, visit nexenta.com.

Software Defined
Data Centers

Nexenta Special Edition

by Brian Underdahl and Robert Novak

Software Defined Data Centers For Dummies®, Nexenta Special Edition
Published by: John Wiley & Sons, Inc., 111 River St., Hoboken, NJ 07030-5774,
www.wiley.com
Copyright © 2014 by John Wiley & Sons, Inc.
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without the prior written permission of the Publisher. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at www.wiley.com/go/permissions.
Trademarks: Wiley, the Wiley logo, For Dummies, the Dummies Man logo, A Reference
for the Rest of Us!, The Dummies Way, Dummies.com, Making Everything Easier, and
related trade dress are trademarks or registered trademarks of John Wiley & Sons, Inc.
and/or its affiliates in the United States and other countries, and may not be used with-
out written permission. Nexenta and the Nexenta logo are registered trademarks of
Nexenta. All other trademarks are the property of their respective owners. John Wiley
& Sons, Inc., is not associated with any product or vendor mentioned in this book.
LIMIT OF LIABILITY/DISCLAIMER OF WARRANTY: THE PUBLISHER AND THE AUTHOR MAKE NO
REPRESENTATIONS OR WARRANTIES WITH RESPECT TO THE ACCURACY OR COMPLETENESS OF
THE CONTENTS OF THIS WORK AND SPECIFICALLY DISCLAIM ALL WARRANTIES, INCLUDING
WITHOUT LIMITATION WARRANTIES OF FITNESS FOR A PARTICULAR PURPOSE. NO WARRANTY
MAY BE CREATED OR EXTENDED BY SALES OR PROMOTIONAL MATERIALS. THE ADVICE AND
STRATEGIES CONTAINED HEREIN MAY NOT BE SUITABLE FOR EVERY SITUATION. THIS WORK IS
SOLD WITH THE UNDERSTANDING THAT THE PUBLISHER IS NOT ENGAGED IN RENDERING
LEGAL, ACCOUNTING, OR OTHER PROFESSIONAL SERVICES. IF PROFESSIONAL ASSISTANCE IS
REQUIRED, THE SERVICES OF A COMPETENT PROFESSIONAL PERSON SHOULD BE SOUGHT.
NEITHER THE PUBLISHER NOR THE AUTHOR SHALL BE LIABLE FOR DAMAGES ARISING HERE-
FROM. THE FACT THAT AN ORGANIZATION OR WEBSITE IS REFERRED TO IN THIS WORK AS A
CITATION AND/OR A POTENTIAL SOURCE OF FURTHER INFORMATION DOES NOT MEAN THAT
THE AUTHOR OR THE PUBLISHER ENDORSES THE INFORMATION THE ORGANIZATION OR
WEBSITE MAY PROVIDE OR RECOMMENDATIONS IT MAY MAKE. FURTHER, READERS SHOULD
BE AWARE THAT INTERNET WEBSITES LISTED IN THIS WORK MAY HAVE CHANGED OR DISAP-
PEARED BETWEEN WHEN THIS WORK WAS WRITTEN AND WHEN IT IS READ.

For general information on our other products and services, or how to create a
custom For Dummies book for your business or organization, please contact
our Business Development Department in the U.S. at 877-409-4177, contact
info@dummies.biz, or visit www.wiley.com/go/custompub. For information
about licensing the For Dummies brand for products or services, contact
BrandedRights&Licenses@Wiley.com.
ISBN: 978-1-118-90356-8 (pbk); ISBN: 978-1-118-90453-4 (ebk)
Manufactured in the United States of America
10 9 8 7 6 5 4 3 2 1

Publisher’s Acknowledgments
Some of the people who helped bring this book to market include the following:

Project Editor: Carrie A. Johnson
Acquisitions Editor: Kyle Looper
Editorial Manager: Rev Mengle
Business Development Representative: Karen Hattan
Custom Publishing Project Specialist: Michael Sullivan
Production Coordinator: Melissa Cossell
Table of Contents
Introduction................................................................... 1
About This Book................................................................... 1
Icons Used in This Book...................................................... 2
Beyond the Book.................................................................. 2
Chapter 1: Introducing Software Defined
Data Centers.................................................................. 3
Understanding SDDCs.......................................................... 3
Seeing the Role of Virtualization........................................ 5
Including Cloud Computing................................................ 7
Off-site storage and computing.................................... 8
Excess capacity for peak workloads............................ 8
Excess storage for growth spurts................................ 9
Chapter 2: Switching to Software Defined
Networking.................................................................. 11
Introducing Virtual LANs.................................................. 11
Understanding the OpenFlow Protocol........................... 13
Seeing How Open vSwitch Fits......................................... 14
Chapter 3: Understanding Storage.......................... 15
Looking at the Types of Storage....................................... 15
Block storage................................................................ 15
Storage area networks................................................. 17
Network attached storage........................................... 17
Object storage.............................................................. 17
SAS/SATA attached storage........................................ 19
Ethernet attached storage........................................... 19
Changes in Underlying Hardware.................................... 20
SSDs................................................................................ 20
SMR................................................................................ 21
Key/value....................................................................... 21
RAID Controllers................................................................ 22
Traditional software RAID........................................... 22
RAID-Z technology......................................................... 22
Chapter 4: Moving to Software Defined Storage....25
Considering Existing Challenges...................................... 26
Gaining Advantage with SDS............................................. 26
Organic growth when you need it.............................. 26
Growth that’s as virtualized as your VLAN............... 27
Strong APIs.................................................................... 28
Understanding What SDS Provides.................................. 29
Flexible Storage Solutions................................................. 31
Chapter 5: Getting the Right Software.................... 33
Supporting Different APIs.................................................. 33
Supporting the Protocols Your Stakeholders Need....... 35
Making Sure Your Hardware is Supported..................... 36
Chapter 6: Ten Questions to Answer
about Your Data Center............................................. 39
Are You Using Proprietary Hardware/Software?........... 39
Are You Locked in to a Single Vendor?........................... 40
Can You Add Capacity Without Shutting Down
the System?...................................................................... 40
Can You Unify the Administration Into a Single
Pane of Glass?.................................................................. 41
Can You Add Performance Without Adding Capacity?..... 41
Are You Ready For Objects?............................................. 41
Can You Interoperate with Legacy Devices?.................. 42
Is your Storage and Networking Technology
Backed by Strong Intellectual Property?...................... 42
How Often Do You Need to Upgrade Your Storage
Capacity?.......................................................................... 43
Why is Software Defined Storage not Virtualization?....43
What are the Six Tenets of World-Class Storage?.......... 43

Introduction

Originally, data centers were simply a place to house a bunch of computers. Recent advances in
both hardware and software have had profound effects
on the way data centers are designed and used. The
data center has become a unified group of hardware
tied together by software-defined architectures.
The two largest drivers of the data center transforma-
tion have been the move to Software Defined Storage
(SDS) and software defined networking (SDN) —
technologies that have been enabled by server
virtualization.
Original data centers would house all the plumbing for
a single computer (literally water-cooled plumbing!).
Today, data centers not only collect all the computers
in one place, but also, through those collections of
computing elements, work around the mismatches in
the pace of development among the various parts of
computer systems.

About This Book


Software Defined Data Centers For Dummies, Nexenta
Special Edition, shows you how these new technologies
can work for you as you attempt to unify your systems
into a consistent and well-formed architecture. In this
book, you discover how to deploy solutions that support
desktops, Big Data, web hosting, search engines, and
massive amounts of storage as a cohesive system that
serves your organization well.


Icons Used in This Book


This book uses the following icons to call your attention to information you may find helpful in particular ways.

The information marked by this icon is important and worth remembering for future use.

This icon points out extra-helpful information.

This icon marks places where technical matters, such as jargon and whatnot, are discussed. Sorry, it can't be helped, but it's intended to be helpful.
Paragraphs marked with the Warning icon call
attention to common pitfalls that you may
encounter.

Beyond the Book


This book can help you discover more about Software
Defined Data Centers (SDDC), but if you want resources
beyond what this book offers, we recommend visiting
www.nexenta.com/SDDCForDummies.

Chapter 1
Introducing Software Defined
Data Centers
In This Chapter
▶ Understanding the basics of Software Defined Data
Centers
▶ Understanding the importance of virtualization
▶ Adding cloud computing into the mix

There have been remarkable changes in the computing universe during the past few years. A number of
important developments have brought many powerful
services into everyday use. This chapter looks at how
these changes have redefined the Software Defined
Data Center (SDDC).

Understanding SDDCs
Originally, data centers housed all of the plumbing for
water cooling a single large mainframe computer. As
computers shrank in size, the data center transformed.
It became a centralized place to house all the special-
ized electrical circuits, network cabling, and cooling

necessary for maintaining a controlled environment for
the computers. A data center occupied a rather large
physical space, consumed a lot of energy, and required
a full-time, dedicated staff of expensive talent to keep
everything functioning properly.
The SDDC is quite different from the traditional
data center. Instead of being a physical structure, the
SDDC is offered as a service. That is, you have something
that provides all the data center's functions but
with baggage that's much easier to manage.
One way to visualize how SDDCs differ from tra-
ditional data centers is to consider how most
people watch movies at home. In the past, you
drove to the video store and rented a tape or
DVD that you took home to play. Today, you
don’t have to drive to the video store; instead,
you can watch movies through streaming ser-
vices, such as Netflix. You still can watch what
you want, when you want, but you don’t have
to bother with a physical tape or disc. To take
the analogy one step further, you probably
have no idea where the video stream is coming
from, either. You simply watch movies as a
­service provided by someone else.
Early data centers existed to house computers
for specialized computations. These evolved
into specialized machines used for specialized
circumstances. There were machines whose
hardware design reflected their computing
purpose. A database management system
(DBMS) server would be different from a com-
putational server. A storage server for a net-
work file system (NFS) would be different from

a storage area network (SAN) storage server. A
Hadoop nameserver would be different from a
Hadoop datanode. These designs would change
over time. As the number of cores per CPU
socket increased, so did the data management
requirements. Hadoop users demanded more
drives as the number of CPU cores increased.
Some data centers demanded more and more density
for their storage to maximize their use of floor space.
Others mandated an upper limit on the storage contained
in a single server or chassis (for example, 20 terabytes)
to limit the blast radius (aka how widespread the collat-
eral damage affects users) when a storage server would
fail. Others insisted that their computer systems would
consume less power (and generate less heat) so more of
the power budget could be dedicated to computing and
less to their addiction to CRAC (computer room air conditioning), which had traditionally consumed 50 percent of
a data center’s power.

Seeing the Role of Virtualization
SDDCs are possible because of virtualization — a process
of simulating physical computers and devices through
software. Virtualization defines virtual machines that are
assigned specified amounts of computing power in the
form of memory, CPU services, and storage.
Before the widespread use of virtualization,
grid computing was a method of assigning com-
puting tasks to one or more physical comput-
ers that existed within a data center. If a task
required 100 computers, you had to dedicate

100 physical machines to the task. With virtu-
alization, a single physical machine can supply
many of the virtual machines.
Virtualization saves money by using all of a
computer’s resources. You add virtual users
until the capacity of the machine is fully
­utilized. The cost of the machine is recouped
across many users, not just a single user. In
this way the machines aren’t left idle, which
would waste resources; instead, the machine’s
use is maximized. As a result your in-house
data center can purchase fewer machines for
more users.
Although virtualization is a critical part of SDDCs, the
concept of virtualization isn’t something new. Indeed,
virtualization has existed since it was first developed
for mainframes roughly 50 years ago. Today, however,
virtualization provides considerable additional func-
tionality compared to when it was originally envi-
sioned in the 1960s. Back then, virtualization was seen
simply as a means to allow many different users to run
multiple applications at the same time on the same
mainframe computer. Today, virtualization enables
computers to provide many different services simulta-
neously in a far more efficient manner than would be
possible otherwise.
Virtualization improves availability because a
user is no longer tied to a single machine. Without
virtualization, if that machine has hardware issues,
the user is unable to do work until the machine is
repaired. If the user is using a virtual instance
of a machine, his application can run on any of
the machines that can run the virtual image.


Tipping the balance


Over the last ten years, according to industry estimates and
trends, a dramatic difference has existed in the advances in
performance of the different parts of a computer system.
These changes include the following:
✓ CPU speeds have increased by 8 to 9 times
✓ Memory (DRAM) speeds have increased by 7 to 9 times
✓ Internal computer bus speeds increased by 20 times
✓ Networking speeds increased 100 times
✓ Hard disk drive speeds increased by 1.2 times
✓ Flash SSDs increased by 50 times
Due to the changes in performance and pricing, the way data
centers are organized is also rapidly changing, helped by vir-
tualization. More and more networking is used to connect
computers to many more storage devices than ever before.
The storage internal to the computer systems is migrating to
flash drives and is used for caching (check out Chapter 3 for
more info on caching).

Including Cloud Computing


Cloud computing merges grid computing and virtualiza-
tion to further promote the efficient use of physical
resources.

Two key elements of SDDCs are as follows:

✓ Software Defined Storage (SDS) is a method of


­virtualization applied to data storage. Effectively,
this virtualization of data storage enables storage
to be provided as a service instead of as a
­physical entity.
✓ Software defined networking (SDN) represents a
­virtualization of networking services that enables
network administrators to provide and administer
network services through an abstraction of lower-
level services. Basically, an abstraction in this
­context means that only the relevant information is
presented to the network administrator rather than
full access to a bunch of mostly unimportant details.

Off-site storage and computing


One of the advantages of virtualized storage and com-
puting is that when insufficient resources are available,
the user’s request can be fulfilled by a virtual machine
that’s located at an off-site facility. These services are
usually offered by cloud hosting providers, including
companies such as Amazon, Yahoo!, eBay, and Google.
Because the off-site storage and computing capacity
uses the same application program interfaces (APIs),
the system administrators can easily redirect specific
tasks to an outside provider.

Excess capacity for peak workloads


The ability to offload tasks to a cloud service provider
means that when specific peak workloads exist (end of
fiscal month, end of fiscal quarter, back-to-school shop-
ping time, holiday shopping time, and so on), the extra

tasks can be migrated to off-site computers to handle
the extra storage capacity or workload, saving the
expense of having to buy extra computers for a few
weeks each year.

Excess storage for growth spurts


In addition to seasonal demands, corporations often go
through growth spurts where the capacity of internal
machines is constrained. The constraint may be a lack of
adequate power until the utility company can bring in
additional power lines, a lack of air conditioning until
additional units are installed on the roof of the building,
or something as simple as a lack of floor space to hold
the additional computers until additional facilities can be
built or leased. In each of these cases, if the company
needs additional capacity, it can lease additional computer
and storage facilities from a cloud service provider.

Chapter 2
Switching to Software Defined
Networking
In This Chapter
▶ Understanding virtual LANs
▶ Looking at OpenFlow
▶ Considering Open vSwitch

Software defined networking (SDN) is one of the key elements that enables the implementation of
Software Defined Data Centers (SDDC). Choosing the
right SDN APIs (application program interfaces) and
packages lets you define a network layer that provides
the services needed to bridge the disparity between
compute speed and storage speed. This chapter takes
a closer look at how SDN functions.

Introducing Virtual LANs


The local area network (LAN) is one of the basic struc-
tures of computer networking. A LAN consists of the
various computers, switches, and cabling or wireless

connections that make up a group of computers that
can communicate and share files locally. Typically,
LANs also include a router to enable communications
to pass data outside the local group, for example, to
the Internet. Traffic outside the router is separated
from local traffic, which prevents local traffic from
being viewed or intercepted on the public network.
Sometimes, however, it’s simply not possible to physi-
cally connect all the computers that need access to the
LAN directly to the local network. For example, an orga-
nization may have offices in different locations that all
need to access the accounting department’s files or the
inventory database. Obviously, you wouldn’t want to
simply make those resources available on a public net-
work such as the Internet, so another method of con-
necting to those files is required. This alternative
method is the virtual LAN (VLAN).
A VLAN consists of multiple LANs or LAN clients that
can communicate privately across a public network
without making that communications traffic public or
visible. Essentially, VLANs function as if each segment
(or domain) of the VLAN is directly connected to all
other segments of the VLAN with a dedicated cable.
SDDCs typically encompass multiple virtual servers
that may be provided by a number of different physical
systems located in various places. A VLAN enables
these virtual servers to be logically grouped together
no matter where the actual physical computers are
located. This private, logical grouping enhances scal-
ability, security, and network management issues to
make the SDDC possible.
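If you think of a VLAN as nothing more than a tag that groups ports logically, the idea is easy to model. The following minimal Python sketch (the switch and port names are made up for illustration) shows how two ports on different physical switches can share a VLAN while a third port, on a different VLAN, stays isolated:

from collections import defaultdict

class VirtualNetwork:
    """Toy model of VLAN membership; not a real switch configuration."""
    def __init__(self):
        self.members = defaultdict(set)   # vlan_id -> set of (switch, port)

    def add_port(self, vlan_id, switch, port):
        self.members[vlan_id].add((switch, port))

    def can_communicate(self, vlan_id, a, b):
        # Two endpoints see each other only if both belong to the same VLAN.
        return a in self.members[vlan_id] and b in self.members[vlan_id]

net = VirtualNetwork()
net.add_port(100, "switch-hq", "port-3")      # accounting, headquarters
net.add_port(100, "switch-branch", "port-7")  # accounting, branch office
net.add_port(200, "switch-hq", "port-4")      # engineering

print(net.can_communicate(100, ("switch-hq", "port-3"),
                               ("switch-branch", "port-7")))   # True
print(net.can_communicate(100, ("switch-hq", "port-3"),
                               ("switch-hq", "port-4")))       # False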

VLANs have been around for decades and are widely
used to carry voice over IP (VoIP) traffic separately
from ordinary data traffic.
VLANs make cloud-based services, such as
SDDCs, far more secure than they would be if
they were visible on the public Internet because
the VLAN keeps the traffic private. There can be
many different VLANs that can overlay one or
more physical LANs. Each VLAN can represent a
different department’s logical network.

Understanding the
OpenFlow Protocol
OpenFlow is a standardized communications protocol
that enables control of network switches and routers
over the network. In many ways, OpenFlow has become
synonymous with SDN. Although VLANs have provided
a form of network virtualization for decades, there was
previously no standard capability to centrally control the network.
Through the use of OpenFlow, you can manage the flow
of traffic through switches and manage the rerouting of
traffic.
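The heart of OpenFlow is the flow table: the controller installs match/action rules, and the switch applies the highest-priority rule that matches each packet. The Python sketch below models that idea only; it does not speak the real OpenFlow wire protocol, and the VLAN IDs, ports, and actions are invented for illustration:

flow_table = []   # rules installed by the controller, kept highest priority first

def install_flow(match, action, priority=0):
    flow_table.append({"match": match, "action": action, "priority": priority})
    flow_table.sort(key=lambda f: f["priority"], reverse=True)

def forward(packet):
    for flow in flow_table:
        if all(packet.get(k) == v for k, v in flow["match"].items()):
            return flow["action"]
    return "send-to-controller"   # table miss: ask the controller what to do

# Reroute storage traffic (iSCSI, TCP port 3260) onto a dedicated uplink.
install_flow({"vlan": 100, "dst_port": 3260}, "output:uplink-2", priority=10)
install_flow({"vlan": 100}, "output:uplink-1", priority=1)

print(forward({"vlan": 100, "dst_port": 3260}))   # output:uplink-2
print(forward({"vlan": 100, "dst_port": 80}))     # output:uplink-1
print(forward({"vlan": 200}))                     # send-to-controller
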
OpenFlow provides a set of software and pro-
tocols that spans a wide variety of switches
and routers and even newer, simpler flow con-
trol devices. By using OpenFlow, administrators can
design flexible network paradigms that respond to the
changing world of bring your own device (BYOD).

BYOD is a company policy that permits
employees to bring their personally owned
mobile devices (laptops, tablets, and smart-
phones) to their workplaces and to use those
devices to access privileged company
­information and applications.

Seeing How Open vSwitch Fits


Open vSwitch is an open source switching software
layer that provides a switching stack for hardware
virtualization environments. Before Open vSwitch, this
function was provided by an embedded virtual switch
within a virtual machine manager (called a hypervisor),
like kernel-based VM (KVM) and VMware.
Because Open vSwitch is a virtual implementa-
tion, it can support a single logical switch that
may span multiple physical devices.

Chapter 3
Understanding Storage
In This Chapter
▶ Considering the different types of storage
▶ Seeing how hardware comes into play
▶ Understanding RAID controllers

Y ou can’t have Software Defined Storage (SDS) with-


out some actual physical storage existing some-
place. Understanding how storage is used and some of
the physical aspects of storage devices are important
elements in having a complete grasp of what’s involved
in moving to SDS. In this chapter, you look at the differ-
ent kinds of storage. For more on SDS, see Chapter 4.

Looking at the Types of Storage


Over the years, many different types of storage have
been developed. In this section, you take a look at
some of the important milestones along the way.

Block storage
One of the side effects of the historical Hollerith cards
that were used by the U.S. Census Bureau to tabulate
population data was the early adoption of fixed-size

records. Early computers adopted Hollerith cards
(see Figure 3-1) for fixed size records of 80 characters
per record. The thinking behind these fixed records
was later incorporated in early storage systems (drums
and disk drives) so data would be encoded in fixed
sized blocks. The blocks on a storage device were typi-
cally encoded as blocks whose size was a power of 2
and often a multiple of 512 bytes (or characters).

Figure 3-1: An example of Hollerith cards.

Over the years, complex encoding schemes were used to manage the data on the storage device to track the
blocks that were in use and the blocks that were avail-
able. Although the blocks are of fixed size, actual files
rarely line up so neatly to fill an exact number of blocks.
File systems were developed that would track how to
find the blocks that belonged to a file and then keep
track of not only how many blocks but also the total
number of bytes in the file (because some portion of
the last block would remain unused).
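A quick bit of arithmetic shows why the byte count matters. Assuming a common 4,096-byte block size (block sizes vary by file system), a minimal Python sketch of the bookkeeping looks like this:

import math

BLOCK_SIZE = 4096   # assumed block size; a multiple of 512 bytes

def block_layout(file_bytes):
    blocks = math.ceil(file_bytes / BLOCK_SIZE)
    slack = blocks * BLOCK_SIZE - file_bytes   # unused tail of the last block
    return blocks, slack

print(block_layout(10_000))   # (3, 2288): three blocks, 2,288 bytes unused
print(block_layout(8_192))    # (2, 0): exactly two blocks, no slack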


Storage area networks


Storage area networks (SANs) were developed because
the data bus internal to a computer system couldn't
extend far enough to support the large number of storage
devices required to hold a user's data. In addition,
a single computer often needed to access more data
than could fit inside it, and it needed to
share that data with other computers. By moving all the
storage devices to a separate network attachment, not
only could many more drives be accessed by each com-
puter but also the computers could share the drives.

Network attached storage


With the introduction of the network file system (NFS)
by Sun Microsystems in the early 1980s, files that for-
merly were tied to a specific disk drive (a device
attached to a computer and assigned a drive number)
became files that were in the hierarchy of files affiliated
with a network location. Each computer became a part
of the hierarchy of files, and users no longer cared
where their files were found.

Object storage
In the 1990s the programming models changed. The
models no longer looked at data as bits, bytes, and
words of information but as ­logical collections of infor-
mation called objects. These objects in turn weren’t
accessed by addressing the bytes and words but by
using methods (programs) to perform operations on
the objects, like creating an object, modifying the
­contents of an object, and destroying an object. These
became important for the later evolution of NoSQL
databases and key/value drives as programming moved
from a model of records with fields to less formally
structured objects and ­key-value models like XML.
These forms of data representation became ideal for
unstructured data, virtual machine images, photos,
videos, emails, and logs of usage such as electricity,
telephone, water, and other utility data.

Why do you need atomic storage?


Not all storage is considered stable or atomic. Storage that isn't
atomic can't guarantee that the data it returns reflects the most
recent writes made.
Most file systems perform their operations in multiple steps.
Storage is allocated for the file, data is written to the file, and
then the file system is updated. If the sequence is interrupted
at the wrong time (such as during a power failure), the file
system is left in an inconsistent state. After restart, the
system must go through a long process of checking the con-
sistency after rebooting and before making the file system
available to users. This consistency checking program is
known by different names on different systems (for example,
chkdsk or fsck). Some file systems attempt to fix these prob-
lems by borrowing a technique from relational database sys-
tems, known as transaction logging. Although transaction
logging improves the recovery time, the process is still
time-consuming.
NexentaStor’s file system is based on the model of atomic
operations. Data blocks in files can be written by application
programs, but until the final step of updating the super blocks
in an atomic (uninterruptable) operation, all those blocks are
candidates for garbage collection by a lazy background pro-
cess that returns those blocks to the pool of free or available
resources. With this innovation, the file system always moves
from one consistent state to another by atomic operations.
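You can see the same principle at the application level with a write-then-rename pattern: readers only ever see the old version or the complete new version, never a half-written file. This Python sketch is an analogy for the atomic-commit idea, not a description of how NexentaStor or ZFS implements it internally:

import os, tempfile

def atomic_write(path, data):
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as tmp:
            tmp.write(data)            # may be interrupted without harm
            tmp.flush()
            os.fsync(tmp.fileno())     # force the bytes to stable storage
        os.replace(tmp_path, path)     # the single atomic step
    except BaseException:
        os.unlink(tmp_path)
        raise

atomic_write("settings.conf", b"threads=8\ncache=on\n")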


SAS/SATA attached storage


The Small Computer System Interface (SCSI) was originally
a bus used by small computers to access storage
inexpensively. However, over time SCSI quickly
grew in speed and capability to provide high-speed
transfer to and from computers.
At the same time, high speed serial busses like IEEE
1394 and USB had been developed. As a result of this
development, the Serial Attached SCSI (SAS) and Serial
Advanced Technology Attachment (SATA) standards
were developed to reduce the complexity of the wiring
and cabling between computers and peripheral storage
devices. Unfortunately, these protocols are still limited
to extremely short runs and have complexity chal-
lenges of their own.

Ethernet attached storage


Recently, several large storage manufacturers began
introducing storage devices attached to computers
over Ethernet connections rather than SAS/SATA. The
advantage is that these devices can be managed by
­relatively inexpensive off-the-shelf Ethernet switches
rather than the much more expensive SAS/SATA
switches.
Even more importantly, storage devices are no longer
internally held captive to the storage server computers.
Storage devices can be inexpensively shared among
computer systems. A failure of the storage server
doesn't mean that all the storage is now lost to the
cluster. The storage chassis for the Ethernet-attached
storage is much simpler and more reliable than a
computer system.


Changes in Underlying
Hardware
Hard disk drives (HDDs) have traditionally supported a
random access model where each individual block of
the drive can be read or written separately from all
other blocks. Because the device write head uses mag-
netic fields to read/write these blocks, the blocks must
be separated by large gaps to avoid having the opera-
tions on one block affect the data in another block.
Advances in the way that data is recorded on these
mechanical spinning disks have allowed the amount of
data stored to increase rapidly for many years.

SSDs
Solid state drives (SSDs) are a modern replacement for
hard drives and are based on semiconductor memory
known as flash memory. Because of their expense
compared with HDDs, SSDs were initially used only for
caching data closer to the computer system. However,
as prices have dropped, they've begun to displace HDDs
thanks to a lower total cost of ownership (TCO) driven
by lower overall failure rates and lower power consumption.
There are no moving parts in an SSD. SSD flash comes in
two flavors: MLC (100,000 rewrites) and SLC (1,000,000 rewrites).
Because both types of flash memory are known to fail at
unpredictable points, most flash memory is overprovi-
sioned with excess capacity. When a portion of the device
begins to fail, the data in that location is remapped to an
alternate location. When you purchase a 600 GB drive, it
may actually contain as much as 900 GB of capacity in
order to ensure that over the working life of the drive the
600 GB capacity will be ­available for use.

Caching is a means of storing information in a
high-speed location so that the most frequently
and/or most recently accessed information is
available without having to retrieve it from the
slower device. Storage caching typically uses
solid-state memory technology.
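The logic of a read cache is simple enough to sketch in a few lines of Python. The fast tier here is just a dictionary standing in for RAM or an SSD, and the eviction policy is least recently used (LRU); real storage caches are far more sophisticated:

from collections import OrderedDict

class BlockCache:
    def __init__(self, capacity, read_from_disk):
        self.capacity = capacity
        self.read_from_disk = read_from_disk   # slow path to the HDD
        self.cache = OrderedDict()             # block_id -> data, in LRU order

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)   # cache hit: mark most recently used
            return self.cache[block_id]
        data = self.read_from_disk(block_id)   # cache miss: go to the slow device
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict the least recently used block
        return data

cache = BlockCache(capacity=2, read_from_disk=lambda b: f"data-{b}")
cache.read(1); cache.read(2); cache.read(1)
cache.read(3)                  # evicts block 2, the least recently used
print(list(cache.cache))       # [1, 3]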

SMR
Shingled magnetic recording (SMR) is one of the newest
technologies contributing to the density of the data
placed on a disk drive. SMR increases the density of
storage and can lower the cost per bit of storage by 20
to 50 percent. This technology can make the cost of
triple redundant object storage palatable to a corpo-
rate customer. The triple redundancy is actually possi-
ble to deploy at a significantly lower cost than current
mirrored or RAID storage solutions (see the later sec-
tion “RAID Controllers” for more on RAID).
SMR devices can’t be used with existing file
systems without a major overhaul, but they’re
perfect for copy on write technologies used by
NexentaStor as well as for key/value storage
devices.

Key/value
One class of storage devices that potentially can
replace all the block-centric storage devices that
precede them is the key/value drive. These drives represent
the convergence of transactional database systems,
XML key/value representations of data, and object
­oriented programming.
The key/value drive is built around a very simple
­premise. You present a binary large object (BLOB or
the value) and the key you want to use to retrieve that

value. When you want to retrieve the value, you simply
present the device with the correct key, and the value
is returned to you. The world outside of the device has
no information about the location of the value on the
device and doesn’t care how the device manages the
information.
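The whole contract fits in a few lines. In the Python sketch below, an ordinary dictionary stands in for the drive; on a real key/value device the same put/get/delete operations are handled by the drive itself, and the caller never learns where the bytes actually live:

class KeyValueDrive:
    def __init__(self):
        self._store = {}   # the device's private layout, invisible to callers

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store[key]

    def delete(self, key):
        del self._store[key]

drive = KeyValueDrive()
drive.put(b"invoice/2014/0042", b"...binary large object...")
print(drive.get(b"invoice/2014/0042"))
drive.delete(b"invoice/2014/0042")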

RAID Controllers
A redundant array of inexpensive disks (RAID) is a way
of ensuring that the data stored on devices is pre-
served even if one of the drives should fail. RAID uses
special forms of encoding the information so if one
member of RAID is lost, excess bits in the other devices
can be used to reconstruct the information.
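The simplest of those encodings is parity: store the XOR of the data blocks on an extra device, and any single missing block can be rebuilt from the survivors. The Python sketch below illustrates the idea only; real arrays work on whole drives and stripes, and RAID-Z (covered later in this chapter) adds checksummed, atomic full-stripe writes on top of it:

from functools import reduce

def xor_blocks(blocks):
    # XOR the blocks byte by byte (all blocks must be the same length).
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks on three data drives
parity = xor_blocks(data)            # stored on a fourth drive

# Drive 2 fails: rebuild its block from the surviving data plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])            # True: b'BBBB' comes back intact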

Traditional software RAID


Traditional software RAID uses the central processor to
perform all the calculations of the checksum data nec-
essary to support RAID. However, if a failure occurred
in the middle of saving the data, software RAID could
lead to data corruption and was often rejected by
enterprises as a reliable form of data protection.

RAID-Z technology
The ZFS file system and volume manager technology
used in NexentaStor and the Illumos/Solaris operating
system fixed the problems associated with both software
and hardware RAID. By avoiding the read-write-read-write
two phase approach of classic RAID technology (which
leads to the “write hole” and silent data corruption even
with hardware RAID), RAID-Z incorporated the RAID
checksums into a single set of atomic writes that guaran-
teed not only the safety of the data but also the integrity
of the file system.

Illumos is a fork of the original OpenSolaris
code that has been used by NexentaStor for its
open source operating system. As OpenSolaris
completed its final build and Oracle announced
waning support going forward, a community of
OpenSolaris users formed Illumos to create a
fork and continue development on a true open
source derivative. Illumos is free, open source,
and Unix-based with the following distinguish-
ing features:
✓ ZFS: A combined file system and logical volume
manager providing high data integrity for very
large storage volumes
✓ DTrace: A comprehensive dynamic tracing framework for troubleshooting kernel and application problems in real time
✓ Kernel-based VM (KVM): Supports native virtual-
ization on processors with hardware virtualization
extensions
For more information on Illumos, visit www.illumos.org.

Chapter 4
Moving to Software Defined
Storage
In This Chapter
▶ Understanding existing system challenges
▶ Looking at the benefits of Software Defined Storage
▶ Getting ahead with SDS
▶ Looking at what SDS provides
▶ Understanding flexible storage solutions

Software Defined Data Centers (SDDC) are built on Software Defined Storage (SDS). Unlike legacy
hardware systems, SDS systems provide the flexibil-
ity that you need in order to implement an SDDC
where storage is offered as a service. This chapter
discusses the challenges inherent in legacy systems,
lists a number of advantages that are offered by
­software defined systems, and looks at some of the
options available.


Considering Existing Challenges


Legacy hardware storage systems present a
number of challenges that make them far less
than ideal for creating your software defined
data center. One of the challenges of the
legacy storage solutions has been their propri-
etary nature. These proprietary implementa-
tions are locked into closed systems that are
difficult and expensive to expand.
Each incremental piece of storage must bear the cost of
the storage component itself and of the legacy vendor’s
markup on the components. The capacity of upgrades
within a legacy system has a fixed limit. Instead of
adding more network switches and servers in 19” racks,
you’re limited to the capacity of the legacy storage
enclosure. In other words, you need to get out the fork-
lift and start doing major infrastructure upgrades when
you need to add new capacity to your legacy system.

Gaining Advantage with SDS


Compared to legacy systems, SDS is far easier and
more economical to upgrade. In this section, you take a
look at the factors that give SDS a great advantage over
legacy hardware systems.

Organic growth when you need it


With SDS you can grow your capacity along with your
needs. The step function for adding incremental capac-
ity is typically measured in thousands of dollars, not
hundreds of thousands of dollars. You still have to plan
for floor space, power, and cooling over time, but you
can even pre-install the network cabling, power cabling,

and physical rack infrastructure so you can literally
slide in new servers, JBODs, and switches/­routers as
you need them. In addition, by sticking with the right
application program interfaces (APIs), you’ll see addi-
tional benefits to sending your overflow capacity/
demand to outside service providers for your less fre-
quently accessed data.

Growth that’s as virtualized


as your VLAN
Different departments have changing needs and growth
in the amount of storage that they require and/or the
number of users demanding access to storage. With
SDS, your growth doesn’t have to follow a specific plan
of one rack for one department (or customer) and
another rack for another department. As your organiza-
tion changes, you can add members to the individual
VLAN networks and configure the switches to tag the
routes/ports/channels to carry traffic to specific
VLANs. Similarly, the storage can be added to racks as
needed, and the VLANs make sure that the new data is
available to the right users. See Chapter 1 for more
information on virtualization.
By dealing with abstracted hardware, the individual dif-
ferences of the underlying hardware are hidden from
the user and application. This is both an advantage and
a disadvantage. The VM model insulates the application
from the proprietary extensions that most manufactur-
ers add to their hardware. Although these proprietary
extensions can be useful and give the last small mea-
sure of additional performance, they have the effect of
locking the application and user to the hardware from
that particular manufacturer.

The VM instance, on the other hand, provides a set of
hardware interfaces that is shared in common by
almost all platforms. If a particular platform is lacking
hardware for most common functions (for example,
64-bit arithmetic on a 32-bit computer) then the VM
hypervisor provides emulation software to implement
the missing functions.

Strong APIs
The ability to build great VM instances is based on
strong APIs that implement specific functions. For
example, writing to a disk drive is a process that’s
invisible to user applications. User applications are
unaware of how much space is available on a particular
disk drive. Similarly, they’re unaware of the underlying
hardware organization of the space on a disk drive. All
of these concerns are hidden behind APIs that provide
definitions of the appropriate system calls or service
routines that provide access to the capabilities to
read/write disks, draw windows on a screen, send mes-
sages to another application, and so on. In this section,
you take a look at a couple of important SDS APIs.
What is POSIX?
The first API is Portable Operating System Interface
(POSIX). The development of POSIX was strongly influ-
enced by the design of UNIX — the most popular porta-
ble operating system that successfully migrated to a
wide variety of hardware. UNIX provided a lot of
­capabilities that couldn’t be found in other portable
operating systems that were more focused on small or
embedded computers.

What is Putget?
Putget represents the operations that are performed on
object storage systems. Although standards are still
evolving in this area, all object storage systems support
three basic operations:
✓ PUT an object
✓ GET an object
✓ DELETE an object
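Because these operations map directly onto HTTP verbs, a REST-style client is about as small as an API gets. The Python sketch below uses the requests library against a made-up endpoint; real object stores use the same verbs but add authentication headers and bucket conventions of their own:

import requests

BASE = "https://objects.example.com/demo-bucket"   # hypothetical endpoint
payload = b"...the object's bytes..."

# PUT an object: the write either lands in its entirety or not at all.
requests.put(f"{BASE}/reports/q3.pdf", data=payload)

# GET the object back by name.
body = requests.get(f"{BASE}/reports/q3.pdf").content

# DELETE the object when it's no longer needed.
requests.delete(f"{BASE}/reports/q3.pdf")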

Comparing POSIX and Putget


What distinguishes Putget from POSIX is that the POSIX
APIs originally ran on a single computer or a closed
network of continuously connected computers. Putget,
like other REST APIs, is designed to rely instead on
atomic operations (meaning the entire operation is
completed and acknowledged). BYOD devices such as
tablets and smartphones have helped to drive the need
for atomic, stateless APIs.

Understanding What
SDS Provides
SDS provides a number of important benefits:
✓ Elimination of hardware dependencies: You’re
free to pick and choose from a variety of vendors
that provide a selection of components at differ-
ent prices.
✓ Ease of migration: When your data is no longer
locked in to a specific vendor, you gain not only
lower costs (choice of hardware) but also the flex-
ibility to migrate users and data to different loca-
tions for ease of access and sharing the data.

✓ Separating the management plane from the data
plane: Traditional file systems tightly integrate
the management of the file system and data with
the allocation, layout, and configuration of the
devices. Don’t lock into a storage system that
doesn’t provide flexibility in the types of devices
or the types of management tools. If everything is
locked into a single infrastructure, you can't adopt
emerging technologies.
✓ Single pane of glass management: This is a set of
tools for the allocation and management of data
that uses standard interfaces to let you see all your
computers from a single point of contact — no
­flipping screens between computer systems.
✓ Adopting a flexible connectivity model: With
SDS, you can adapt the connectivity model
(number of pipes, speed of pipes, protocol used
to access data, and the actual/physical location of
the data) to meet the application's needs with any
access method and advanced functionality.
✓ Using commercial off-the-shelf (COTS) platforms:
COTS employ standard interfaces and can be inte-
grated into a cohesive whole with an integrated
management plane on a single pane of glass that
operates independently of the data devices that
are added to the system over time.
✓ Automatic tiering: By using software that auto-
matically places data in the correct storage tier
(slow HDDs versus SSDs versus Ramdisk), you can
achieve the performance you need without buying
excessive storage.

✓ Converged infrastructure: Computing, networking,
and storage are all freed from platform/­hardware
dependencies, and Big Data is available to all
users/applications that need it without getting bot-
tlenecked on a single proprietary resource.
When you combine this list of benefits, you
save money over proprietary and legacy
­systems. In addition, your organization can
ensure an orderly migration to standard APIs
that are adopted by the industry at large.
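The automatic tiering item above is easy to picture with a small sketch. In the Python below, the promotion threshold and tier names are invented for illustration; a real tiering engine watches access patterns far more carefully and moves the data itself, not just a label:

from collections import Counter

HOT_THRESHOLD = 3            # assumed promotion point, purely illustrative
access_counts = Counter()
placement = {}               # block -> "ssd" (fast tier) or "hdd" (capacity tier)

def read(block):
    access_counts[block] += 1
    placement[block] = "ssd" if access_counts[block] >= HOT_THRESHOLD else "hdd"
    return placement[block]

for _ in range(4):
    read("index-block")      # frequently touched data earns a spot on the fast tier
read("archive-block")        # rarely touched data stays on spinning disks

print(placement)             # {'index-block': 'ssd', 'archive-block': 'hdd'}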

Flexible Storage Solutions


Through the use of an SDDC, you can integrate a variety
of new technologies into a single data center. Your data
center no longer behaves as a monolithic beast that can't
change to meet the individual needs of departments
that require cutting-edge tools and technology, including
virtual machines like KVM and VMware. The stack is made
up of three layers:
✓ Application layer: This runs on top of the plat-
form infrastructure and is the engine that drives
flexible storage solutions like NexentaStor.
✓ Management layer: This is the universal adminis-
tration console for all assets.
✓ Solution stack layer: This provides the open inte-
gration APIs. Using open APIs like POSIX and
Putget keeps expensive proprietary solutions out
of your data center.

Nexenta storage software provides SDS that serves a
wide variety of workloads and business-critical
applications, working continuously to better integrate
the Nexenta products into customer environments.
Collaborative R&D with technology partners, as well as
with leading application and integration-focused partners
for specific markets, allows Nexenta to offer solutions
for the key workloads running in data centers. Check out
Figure 4-1 for a look at Nexenta's software stack.

Figure 4-1: Nexenta’s software stack.

Chapter 5
Getting the Right Software
In This Chapter
▶ Understanding the interfaces that need support
▶ Making sure all necessary protocols are covered
▶ Supporting all the hardware

To make sure that your Software Defined Data Center (SDDC) project is successful, ensure that your
chosen solution provides the proper support for your
environment. This support includes having all the nec-
essary programmatic interfaces, covering the bases on
all the important protocols, and making sure that your
hardware will function properly, too. This chapter
touches on each of these areas.

Supporting Different APIs


Application program interfaces (APIs) are the mechanisms
used to access all the services provided by an application.
They're the cornerstone of successful control and
management paradigms. When systems are built on
standard and well-defined APIs, there’s room for compe-
tition among different vendors of both hardware and
software to employ a variety of solutions that can knit
together seamlessly.
User interfaces — the way that users interact with an
application — are the most important of the APIs that
you’ll commonly need to consider. SDDC applications
typically provide one or more of the following:
✓ Command line interfaces (CLI): These interfaces
provide system administrators with tools that let
them scale from dozens of computer systems to
thousands. Well-designed CLIs include consistent
commands and error messages that identify which
commands fail on which machines, so successful
operations can be tracked and administrators can
concentrate on the failure cases (a minimal sketch
of this pattern follows this list).
✓ Graphical user interfaces (GUIs): These inter-
faces help administrators remember how to con-
trol obscure portions of the operations available
when controlling computer systems. However, the
strength of a GUI is also its fundamental weakness.
If there are thousands (or tens of thousands) of
systems in a data center, an administrator doesn’t
have the time to visit the console GUI of each
computer system, so most system administrators
prefer a combination of GUI and command line.
✓ HTML: HTML is independent of the underlying
native windowing systems. As a result, GUIs that
have been developed to work with HTML are
available for PCs, Macs, mainframes, tablets, and
even smartphones. HTML also provides a frame-
work for the atomic RESTful interfaces to network
applications.

✓ Simple Network Management Protocol (SNMP):
Originally created to manage network devices, SNMP
has been adopted for computer systems as well. When
a computer system reaches a critical threshold, such
as CPU temperature, an SNMP alert can be sent that
allows an SNMP manager to pop up a notification on
the management single pane of glass.
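As promised, here is a minimal Python sketch of the fan-out pattern a good CLI supports: run one command everywhere and report only the failures. The host names are hypothetical, and a production deployment would use a proper orchestration tool rather than bare ssh calls:

import subprocess

HOSTS = ["storage-01", "storage-02", "storage-03"]   # hypothetical hosts

def run_everywhere(command):
    failures = {}
    for host in HOSTS:
        result = subprocess.run(["ssh", host, command],
                                capture_output=True, text=True)
        if result.returncode != 0:
            failures[host] = result.stderr.strip()
    return failures   # successful hosts need no attention

for host, error in run_everywhere("zpool status -x").items():
    print(f"{host}: {error}")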

Supporting the Protocols Your Stakeholders Need
A number of protocols are used to communicate
between different devices. You need to make sure that
all the required protocols are included in your SDDC
deployment. Several important protocols include
✓ Transmission Control Protocol/Internet Protocol
(TCP/IP): This is the oldest network layer still in
use today, and industry trends point to the
continued availability of TCP/IP for a long time to
come. TCP/IP has evolved and changed from the
ALOHAnet implementation over packet radio to
the 100 gigabit per second networks that you
see today.
✓ Storage area network (SAN) and Fibre Channel
over Ethernet (FCoE): SAN separated storage
from computer systems, and FCoE has emerged as
the lower-cost implementation of SAN. In their prime,
these protocols represented a performance boost
over the TCP/IP implementations of their time,
but at a higher price point.
✓ InfiniBand: A high-speed networking protocol
that found early success in high performance
computing (HPC) environments. It has gone
through a migration from 10 Gbps to 20 Gbps to
40 Gbps, but it too is quickly being supplanted by
TCP/IP. InfiniBand provides a network protocol that
implements a subset of TCP/IP, but it never fully
implemented all the TCP/IP capabilities.
✓ ATA over Ethernet (AOE): A method of virtualiza-
tion that allows applications written for Advanced
Technology Attachment (ATA) devices to function
over the Ethernet as if the devices were locally
attached to the computer.
✓ iSCSI: A protocol that allows an application to
perform block-mode operations on a SCSI disk
drive successfully over Ethernet.
These protocols are supported by excellent Software
Defined Storage (SDS) systems. In a true SDS environ-
ment, your applications that use such APIs can simply
plug in and take advantage of the underlying devices
without being aware that the world has changed
beneath their interfaces.

Making Sure Your Hardware is Supported
Building your SDDC means that you first shop for the
SDS and software defined network (SDN) software
components that provide your applications and your
developers with the tools they need to get their work
done. Then you select the underlying commercial off-
the-shelf (COTS) hardware components that support
your SDS and SDN infrastructure. After that, you
select the admin­istrative tools that give you the max-
imum visibility into the operation of your hardware
and software components.

These steps don’t have to be accomplished in one fell
swoop. You can migrate your existing data center into
an SDDC by gradually changing over and adding in one
subsystem at a time. Take a look at the types of hard-
ware you’ll likely to need to support your SDDC:
✓ x86 architectures: This processor architecture
dates back to the Intel 8086 CPU, but it covers
more than the instructions that execute on the
CPU. It also defines the buses used to reach other
CPUs and memory (QPI), as well as the buses used
to reach peripheral controllers (PCIe) and low-
level devices such as heat and fan-speed sensors
(I2C). The x86 architecture is strong and promises
to be around for some time to come.
✓ ARM architectures: Processors built around this
architecture (another CPU family) were initially
used as embedded controllers inside other
devices. However, increasing speed and perfor-
mance have made ARM suitable for a variety of
tasks at the low end of the x86 performance range.
In some cases ARM processors have found a place
in entry-level storage, with the recognition that
they lack the performance necessary for midrange
and high-end storage.
✓ Graphics processing units (GPUs): GPUs were
originally designed to accelerate the processing
required to update display information, which
means performing many computations in parallel.
For storage servers, it isn't intuitively obvious why
GPUs would be useful, but thanks to the many
parallel processors found in a GPU, they're excel-
lent at three major computational tasks used in
storage (illustrated in the sketch at the end of this
section):
• Compression of data (to reduce the size of data
on disk)
• Encryption (to protect data from unauthorized
access)
• Deduplication (to reuse data already found in
the storage system)
Tablets and smartphones have driven the
creation of ARM processors with companion
GPUs. These system-on-a-chip (SoC) implemen-
tations are relatively inexpensive because
they're manufactured by the millions for tablets
and smartphones. As a result, an ARM processor
with a companion GPU is very low cost and can
be adapted for use as a low-power storage
processing component.
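
To make those storage-side tasks concrete, here is a small
CPU-only sketch of two of them: compression with zlib and
hash-based deduplication of fixed-size chunks. The chunk
size and sample data are purely illustrative; a production
system would parallelize this work, for example on a GPU.

    # Minimal sketch: compression plus hash-based deduplication on
    # fixed-size chunks. Chunk size and data are illustrative only.
    import hashlib
    import zlib

    CHUNK = 4096                                     # illustrative chunk size
    data = (b"A" * CHUNK) * 8 + (b"B" * CHUNK) * 2   # repetitive sample data

    seen = {}      # fingerprint -> compressed chunk (stored once)
    layout = []    # order of fingerprints as written to "disk"

    for offset in range(0, len(data), CHUNK):
        chunk = data[offset:offset + CHUNK]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint not in seen:                   # deduplicate
            seen[fingerprint] = zlib.compress(chunk)  # compress
        layout.append(fingerprint)

    stored = sum(len(c) for c in seen.values())
    print(f"logical {len(data)} bytes -> stored {stored} bytes "
          f"({len(seen)} unique of {len(layout)} chunks)")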

Chapter 6
Ten Questions to Answer
about Your Data Center
In This Chapter
▶ Considering questions about your environment
▶ Helping your transition from legacy environments

Planning the transition to a Software Defined Data
Center (SDDC) is challenging. The ten (okay, 11)
sections in this chapter ask questions that help you
understand how to make the transition from your
legacy environment to an SDDC environment.

Are You Using Proprietary Hardware/Software?
The real question may be this: How dependent are you
on proprietary hardware and software? If your license
expires, do you lose access to your data, or is it held
for ransom by your storage or application vendor? It's
unlikely that all your hardware and software solutions
come from a single vendor.

Make sure opportunities exist for your busi-
ness to put a new project into a pilot SDDC
that incorporates elements of Software Defined
Networking (SDN) and Software Defined
Storage (SDS). You can start by migrating a
portion of the user data for a research project
from the proprietary platform to your new SDS
platform.

Are You Locked in to a Single Vendor?
Is the same vendor supplying both your hardware and
your software? Are you constrained by your lease and
by the software itself? If it's time for a major turnover
in the contract for a proprietary hardware platform,
perhaps it's time to look at alternative vendors, and
there may be key functions (for example, SDS) that are
good candidates for replacement.

Can You Add Capacity Without Shutting Down the System?
The answer to this question reveals a great deal about
whether you're locked in. In a true SDDC environment,
adding capacity should be as simple as plugging addi-
tional disk drives into existing servers (without shut-
ting down) or adding more storage servers. Even if you
have to shut down an individual server, there are few
excuses for losing continuity in the whole system.
Well-designed SDS and SDN allow you to reroute traffic
around a failed or shut-down server until you've added
the additional memory or disk capacity.


Can You Unify the Administration Into a Single Pane of Glass?
If your administrators need to learn a different control
paradigm for each of the data center systems, they’ll
probably be frustrated with the experience. Admin-
istrators not only have to learn the idiosyncrasies of
different user interfaces, but they face the daunting
task of visiting hundreds of control screens for hun-
dreds of servers. This is a labor-intensive, time-
consuming, and costly task.
Software appliances that support management
with Chef, Puppet, and SNMP are examples of
how these administrative tasks can be
centralized and automated.
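
As one hedged example of that centralization, the sketch
below polls sysUpTime over SNMP from a list of servers and
prints one status line per host. It again assumes the
third-party pysnmp library, and the host names and
community string are placeholders for your environment.

    # Minimal sketch: poll many servers from one place over SNMP.
    # Assumes the third-party "pysnmp" library; hosts and community
    # string are placeholders.
    from pysnmp.hlapi import (
        SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
        ObjectType, ObjectIdentity, getCmd,
    )

    HOSTS = ["storage01.example.com", "storage02.example.com"]
    SYS_UPTIME = "1.3.6.1.2.1.1.3.0"        # SNMPv2-MIB::sysUpTime.0

    for host in HOSTS:
        error_indication, error_status, _, var_binds = next(
            getCmd(
                SnmpEngine(),
                CommunityData("public", mpModel=1),
                UdpTransportTarget((host, 161), timeout=2, retries=1),
                ContextData(),
                ObjectType(ObjectIdentity(SYS_UPTIME)),
            )
        )
        if error_indication or error_status:
            print(f"{host}: unreachable ({error_indication or error_status})")
        else:
            print(f"{host}: up, sysUpTime = {var_binds[0][1]}")

Chef or Puppet handles the configuration half of the same
problem, pushing identical settings to every node from a
single description.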

Can You Add Performance Without Adding Capacity?
In many storage arrays, the only way to add perfor-
mance (IOPS) is to also add large amounts of capacity,
because performance comes from striping data across
more drives. More sophisticated systems add perfor-
mance by providing high-performance caches (such as
the ZIL, ARC, and L2ARC used by ZFS-based solutions
like NexentaStor). These caching solutions deliver
higher performance and are also more cost effective.
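
The back-of-the-envelope arithmetic below shows why a cache
improves delivered performance without adding a single
drive. The latency figures and hit ratio are assumptions
chosen only to illustrate the effect, not measurements of
any particular product.

    # Minimal sketch: effective read performance with and without a
    # cache. All numbers are illustrative assumptions, not benchmarks.
    DISK_LATENCY_MS = 8.0      # random read from a spinning disk
    CACHE_LATENCY_MS = 0.2     # read from a DRAM/flash cache
    HIT_RATIO = 0.9            # assumed fraction of reads served by cache

    effective_ms = (HIT_RATIO * CACHE_LATENCY_MS
                    + (1 - HIT_RATIO) * DISK_LATENCY_MS)

    iops_no_cache = 1000.0 / DISK_LATENCY_MS
    iops_cached = 1000.0 / effective_ms

    print(f"no cache : {iops_no_cache:.0f} IOPS per outstanding request")
    print(f"90% cache: {iops_cached:.0f} IOPS per outstanding request")

With these assumed numbers, the cache lifts a single request
stream from about 125 to roughly 1,000 IOPS, an improvement
that extra raw capacity alone wouldn't match at the same cost.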

Are You Ready For Objects?


Has your enterprise thought through the impact of
bring-your-own-device (BYOD) hardware such as
tablets and smartphones?

These devices access corporate data very differently
than traditional computers do, and their users want
that data available any time, any place. Although this
availability is desirable, the corporation must ensure
that the data (especially in transit) is protected.
Many applications that were developed for traditional
client/server environments, such as a PC talking to a
mainframe or web server, have to be rethought in
terms of the object (PUT, GET, DELETE) model
required for object access.
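
The sketch below shows the shape of that object model using
the widely used requests library against a hypothetical HTTP
object endpoint. The URL is a placeholder, no authentication
is shown, and a real deployment would follow your object
store's actual API and credential scheme.

    # Minimal sketch of the object (PUT, GET, DELETE) model over HTTP.
    # Assumes the third-party "requests" library; the endpoint URL is
    # a hypothetical placeholder with no authentication.
    import requests

    OBJECT_URL = "https://objects.example.com/bucket1/report.pdf"

    # PUT: store the object.
    with open("report.pdf", "rb") as f:
        requests.put(OBJECT_URL, data=f).raise_for_status()

    # GET: retrieve it from any device, any place.
    resp = requests.get(OBJECT_URL)
    resp.raise_for_status()
    print(len(resp.content), "bytes retrieved")

    # DELETE: remove it when it's no longer needed.
    requests.delete(OBJECT_URL).raise_for_status()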

Can You Interoperate with Legacy Devices?
What will happen to your legacy devices after you start
to replace your technology infrastructure? Will you
have to dispose of the existing devices by selling them
at scrap prices? A better option is to repurpose them
as part of the new software appliances so that you can
continue to amortize your capital expenditure (CAPEX)
in storage.

Is Your Storage and Networking Technology Backed by Strong Intellectual Property?
Why is it important that your storage vendor has a
strong intellectual property (IP) program? If the appli-
ance manufacturer is sued for infringing on someone
else's patent, you could lose access to the software.
Systems backed by a strong set of patents that are
licensed when you license the software provide a layer
of protection for your company.
If the vendor has a strong IP program, it has the strength
to countersue or offer to cross-license patents for even
stronger protection.

How Often Do You Need to Upgrade Your Storage Capacity?
This question is deceptively simple. Does your storage
management software report how much of your capac-
ity is used? Do you have trend analyses that show the
monthly and seasonal growth in your capacity needs?
You need this type of information to ensure that you
have the capacity to meet future demands.
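
If your tools don't produce that trend analysis, a rough
projection is easy to sketch. The monthly usage figures
below are invented for illustration; a real forecast would
draw on your own monitoring history.

    # Minimal sketch: project when capacity runs out from monthly
    # usage samples. The numbers are invented for illustration.
    usage_tb = [110, 118, 125, 134, 141, 150]   # last six months, in TB
    capacity_tb = 200

    # Average month-over-month growth.
    growth = [later - earlier
              for earlier, later in zip(usage_tb, usage_tb[1:])]
    avg_growth_tb = sum(growth) / len(growth)

    months_left = (capacity_tb - usage_tb[-1]) / avg_growth_tb
    print(f"average growth: {avg_growth_tb:.1f} TB/month")
    print(f"remaining headroom lasts roughly {months_left:.1f} months")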

Why Is Software Defined Storage Not Virtualization?
Storage virtualization hides storage capacity and the
specifics of the hardware resources from the end-user.
SDS goes much further by separating the management
capabilities and services of a class of storage from the
underlying storage hardware and devices.

What Are the Six Tenets of World-Class Storage?
The acronym S.M.A.R.T.S. stands for the six tenets of
world-class storage. The tenets are
✓ Security: Has your SDDC been hardened to prevent
unauthorized access? LDAP? AD? Encrypted data?

✓ Manageability: Is your SDDC simple and easy to
manage?
✓ Availability: Will your SDDC maintain five nines of
availability? (See the quick calculation after this list.)
✓ Reliability: Will your SDDC maintain data integrity
to achieve five nines of lossless data?
✓ Total Cost of Ownership: Has your SDDC lowered
your total cost of ownership (TCO)?
✓ Scalability: Can your SDDC scale to support all
the data, compute, and networking needs of your
end-users?
