STORAGE
MANAGING THE INFORMATION THAT DRIVES THE ENTERPRISE
FEBRUARY 2016, VOL. 14, NO. 12

CONTENTS
EDITOR'S NOTE / CASTAGNA: Storage predictions, ridiculous or sublime?
STORAGE REVOLUTION / TOIGO: The real meaning of software-defined storage
2015 PRODUCTS OF THE YEAR: The envelope, please ... and the winners are
SNAPSHOT 1: Backup hardware shoppers prefer big-name vendors
HYPER-CONVERGENCE: Deep dive into hyper-converged systems
SNAPSHOT 2: Backup buyers get discounts, concessions
PROTECT: See which primary data protection tech is best for your shop
HOT SPOTS / BUFFINGTON: Media mashup might be best for data protection
READ-WRITE / KATO: New tools for scaling NAS
About us


EDITOR'S LETTER
RICH CASTAGNA

Predictions to give you a leg up on 2016
Vendors and analysts share their industry insights to help you prepare for the rest of the year.

BY NOW, EVERYONE'S probably well past scratching out 2015 and writing 2016 on their checks (remember checks?) and has already forgotten just exactly what New Year's resolutions they made.
But, hey, it is a new year, and you're all probably raring to go and ready to take on new projects, bolstered by a still mostly unspent budget and a lot of great expectations. It's that traditional recharging of our psyches and souls, and it fills us with all kinds of optimism.
And if we need a little nudge into the future of data storage (even just a subtle shove) we can always count on the annual prognostications of storage industry pundits, analysts, vendors and, ahem, maybe a storage editorial writer.
Having already shared my forward-looking visions in this space, I soon discovered that there was still a lot of ground to cover as my email inbox spilled over with dozens (nay, scores!) of predictions on the future of data storage, pouring in from the aforementioned constituencies. Some were pretty insightful, most were blatantly self-serving, and still others were good for a chuckle or two. Without naming names, here are some inbox gems.
Contrary to popular belief, tape will not die in 2016

Tape is taking longer to die than Generalissimo Franco did back in 1973 and 1974 and 1975. This prediction on the future of data storage (boldly suggested by a backup software vendor, not the LTO Consortium) goes on to make a very good point by adding that tape is still a reliable and cheap offline medium for cold archive. The prediction then swerves a bit in the opposite direction and says that cloud storage will replace tape in some cases. So maybe tape dies in 2017?
Orchestration and automation will ensure
smoother operations and shorter scaling time

This is another vendor contribution on the future of data storage, and a curious one at that. My mind might be stuck in the storage world and maybe I don't see the big picture, but isn't doing things faster and easier the whole point of orchestration and automation? Maybe they just didn't want to go too far out on a limb and make the data center of the future sound too good.

"Hardware is the new software": the emergence of scale-in hardware architecture

I'm not even sure I know what this one means. But if we're living in the age of software-defined storage and this prediction is correct, does that mean we should look forward to hardware-defined storage, or hardware-defined software? Whatever it means, I'll be sure to keep an eye out for it in 2016. I guess what they really mean is that hardware does actually matter, and despite "software-defined" showing up in front of every conceivable data center thing these days, we still have hardware and it's still pretty darned important. I couldn't agree more, although I might've said it a little differently.

What excites me about the future of tape is the new use cases that LTFS is opening up

In this case, the prognosticator's excitement is understandable, as this prediction on the future of data storage does come from one of the LTO Consortium members. I was once excited by LTFS, but it's been many years now and there are only a handful of products built around that technology. What really caught my eye, however, was the use of the acronym "tNAS." That got me revving up the Google machine to see who minted that acronym. Frankly, I was afraid that I wasn't in the know and maybe tNAS was the storage buzzword du jour. It actually means "tape as NAS," so feel free to toss that acronym around.

High-profile security breaches are set to continue in 2016, and more executives will become the targets of hackers

Those of us who aren't executives can breathe a sigh of relief, and try to remember not to stand too close to any executives in 2016. I'm not sure it takes all that much insight or expertise to predict that something bad that's been virtually unchecked for years will get worse in the coming year. A hint or two about what we should do about it would've been nice.

Dell-EMC chaos will ensue

I suppose this prediction hinges on your interpretation of the word "chaos." And, to be honest, this prediction came from a vendor of, yes, you guessed it, storage systems. So this is more of a prayer than a prediction.

Extinction of the IT specialist

That's a pretty gutsy prediction to make, especially by a storage product vendor. I think they're predicting that their customers will go the way of the dinosaurs. (I have this picture of IT specialists clinging to their beloved tape libraries, and together being hauled out of data centers and dumped into museums.) But there was a serious part of this prediction that describes a world where everyone in the data center is a generalist, and everything has to be simple and manageable through a single pane of glass. So it's not really about extinction, but rather about IT utopia.
Thanks to all of these vendors, analysts and perceptive pundits for sharing their visions of the near (and far) future of data storage and its place in our data centers. I don't know about you, but I'm optimistic about 2016.

RICH CASTAGNA is TechTarget's VP of Editorial.


STORAGE REVOLUTION
JON TOIGO

Defining software-defined storage
Most of the software-defined stuff we see is rehashed or proprietary tech, but two products stand out in this crowd.

FOR A FEW years now, folks have been pushing the idea that server virtualization, and its correlatives, software-defined networks and software-defined storage, is a panacea for everything that's been holding back IT service delivery.
The mantra is easy to grasp: less hardware and centralized management equals lower cost of operation and reduced total cost of ownership. But reality has not seen the delivery of this value proposition in very many shops, and many operators, including me, have been losing our collective religion.
In the storage realm, the software-defined storage (SDS) market has been hampered by the limited vision and proprietary objectives of the leading hypervisor vendors.

On the one hand, the software-defined storage market has been advanced as a purported fix for the disappointing performance of virtualized applications, though the actual performance issue rarely has anything to do with storage I/O latency.
On the other hand, hypervisor vendors have advanced their interpretation of SDS functionality (including some storage services, excluding others) in an apparent effort to create and reinforce silos of technology that only work with data from their virtualized workloads and preferred vendor hardware.

WHAT'S IMPRESSIVE ABOUT THE SOFTWARE-DEFINED STORAGE MARKET?

There are really only two things I've seen in the software-defined storage market over the past year that have even remotely impressed:
- The delivery of specialized archival gateway appliances as virtual machines (for those who want to keep their hardware footprint on the smallish side while still practicing the commonsensical and essential practice of data archiving)
- The introduction of adaptive parallel I/O technology, which leverages a full SDS stack to optimize raw storage I/O throughput and delivers ridiculously fast I/O processing at an extraordinarily low cost per I/O
The archival gateway virtual machine (VM) is an innovation from Crossroads Systems, formally announced at the end of 2015. Crossroads has been delivering archive gateways for quite some time under the brand name StrongBox. They were among the first to see the promise of the Linear Tape File System (LTFS) as a means to bridge production file system-based storage to file and object archive, preferably using tape or tape cloud. The latter, the tape cloud, was pioneered by Crossroads partner Fujifilm, a company that continues to do the hard work of growing the efficiencies, capacity and resiliency of magnetic tape.
Crossroads smartly enabled a generic server with a preinstalled version of LTFS, giving it the ability to take files and objects from production storage and move them transparently to extremely high-capacity tape in accordance with archival policy. They initially targeted the two leading repositories of files, NetApp filers and Microsoft file servers, and made short work of integrating production and archive data storage.
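Crossroads doesn't publish StrongBox's policy engine, but the shape of policy-driven archiving is easy to sketch: walk the production share and relocate anything untouched for longer than the policy allows onto the LTFS-mounted archive. A minimal illustration; the mount points and the 180-day policy are assumptions, not StrongBox settings:

```python
import os
import shutil
import time

PRODUCTION = "/mnt/filer"       # hypothetical production file share
ARCHIVE = "/mnt/ltfs_archive"   # hypothetical LTFS-mounted tape volume
MAX_AGE_DAYS = 180              # illustrative archival policy

cutoff = time.time() - MAX_AGE_DAYS * 86400

for root, _dirs, files in os.walk(PRODUCTION):
    for name in files:
        src = os.path.join(root, name)
        if os.path.getmtime(src) < cutoff:   # not touched within policy window
            rel = os.path.relpath(src, PRODUCTION)
            dst = os.path.join(ARCHIVE, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)            # file now lives on the archive tier
```

A real gateway would leave a stub or namespace entry behind so the move stays transparent to users; that bookkeeping is omitted here.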
The relationship with Fujifilm gave them the ability to
extend the bridge across a WAN to an archival cloud service called the Dternity Media Cloud. Using tape provides
a means to seed the Dternity cloud when there is too much
data to transfer cost effectively across a wire, and also as
a means to retrieve a substantial amount of data from the
archive when necessary and re-deploy it into production
storage.

HOW TWO VENDORS HELPED THE SOFTWARE-DEFINED STORAGE MARKET

Both the StrongBox appliance and the Dternity gateway and media cloud service were a win for software-defined storage vendors and users alike. But there was still a challenge. Some customers didn't want to deploy another server, even a cool archive gateway appliance. Given their efforts to consolidate and reduce the number of servers via server virtualization, deploying a specialized server seemed to be a bit of backsliding. So, Crossroads innovated again, delivering its StrongBox for Fujifilm Dternity gateway appliance as a virtual machine capable of running under your favorite hypervisors, starting with ESXi. That bit of out-of-the-box thinking may well make archive much more commonplace in virtual server settings. You can download a free 90-day trial version of StrongBox VM from Crossroads Systems to test in your own shop.
That takes care of what I call retention storage, which is the second part of the contemporary storage paradigm. Retention storage is where we put data that we need to retain but rarely, if ever, access or update. As much as 70% of your current data probably belongs in retention storage, and tape archive is clearly the cost-effective choice.

MAKE ROOM FOR CAPTURE STORAGE IN YOUR INFRASTRUCTURE

The other part of your infrastructure is capture storage, the storage that's optimized for high-performance access and fast IOPS. There's no shortage of kit makers
in the software-defined storage market who want to sell us the fastest flash arrays or the cleanest and most stovepiped VMware Virtual Volumes (VVOLs) products to expedite I/O handling. But the breakthrough that we saw with the release of the SPC-1 benchmark results for DataCore Software's Adaptive Parallel I/O in December 2015, and the delivery this month of software-defined storage enabled with that technology, takes the cake.
Go visit the Storage Performance Council report for yourself and see how DataCore managed to get the lowest cost per IOPS ($0.08 per IOPS) in history using commodity disk, flash and server equipment along with its own storage virtualization software. That story is poised to improve over the next few months, as the company has its second round of SPC-1 benchmark tests certified, showing how you can squeeze over a million IOPS out of an economical server/storage kit of your own choice.
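The SPC-1 price-performance metric is just division: the total tested system price over the audited SPC-1 IOPS result. A quick sketch with hypothetical round numbers in the neighborhood of the published result (not the audited figures):

```python
# SPC-1 price-performance = total system price / SPC-1 IOPS
total_price_usd = 36_400   # hypothetical tested-configuration price
spc1_iops = 459_000        # hypothetical audited IOPS result

price_per_iops = total_price_usd / spc1_iops
print(f"${price_per_iops:.2f} per SPC-1 IOPS")   # -> $0.08
```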
While I have been listening to the woo peddlers around software-defined-everything basically bend the ideas to serve their proprietary ends, it is rewarding to see some reasons to keep the faith in the software-defined storage market. These are two.

JON WILLIAM TOIGO is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.


PRODUCTS OF THE YEAR

2015 Storage Products of the Year
Find out which 18 products from the past year are on the cutting edge of technology.

BY PAUL CROCETTI, GARRY KRANZ, SONIA LELII, DAVE RAFFO, CAROL SLIWA, SARAH WILSON AND ED HANNAN

EACH YEAR, WE invite technology vendors to submit their nominations for our annual Storage magazine/SearchStorage Products of the Year, which recognize the previous year's most innovative data storage products. Our sole focus is on products introduced or significantly upgraded within the past year. This results in new or lesser-known products faring quite well against more recognizable products if those products have not been vigorously upgraded.
Ultimately, these awards are intended to recognize innovations in technology. But innovation is not the only consideration: the product has to work well and make its users happy. Our judges also consider performance, ease of integration and ease of use, along with manageability, functionality and value. Our judging panel consists of users, analysts and consultants, along with Storage magazine and SearchStorage writers and editors.
Categories included backup and disaster recovery software and services, backup hardware, storage management tools and storage system software. As was the case last year, we separated the storage systems category into all-flash systems and disk/hybrid systems.
To the 18 Products of the Year winners profiled on the following pages, we offer hearty congratulations!


BACKUP AND DISASTER RECOVERY SOFTWARE AND SERVICES

GOLD: Veeam Availability Suite v8

Veeam Availability Suite v8 offers the company's virtual server backup software and its Veeam ONE monitoring software in a single platform, and introduces more than 200 new features and enhancements.
Highlights include Veeam Cloud Connect, integration with NetApp and built-in WAN acceleration for VM replication. Veeam Availability Suite v8 leverages virtualization, storage and cloud technologies to deliver recovery time objectives of less than 15 minutes. The software provides 24/7 data center availability.
Veeam Cloud Connect gives Veeam partners a platform that lets them offer backup hosting as a service to Veeam's customer base. Backup I/O Control ensures a production workload's availability by controlling the impact of backup and replication jobs on production VMs. Snapshot Hunter detects and automatically consolidates hidden and stuck VM snapshots, preventing data stores from overfilling with snapshot files and stopping production VMs.
Veeam ONE is a monitoring, reporting and capacity planning tool for VMware vSphere, Microsoft Hyper-V and the Veeam backup infrastructure. It provides complete visibility of the IT environment to detect issues before they impact operations.
Veeam Availability Suite v8 works with whatever production and backup storage is in place, providing the freedom to change storage providers over time or use a mix, according to the company. Veeam also integrates with
storage products from Hewlett Packard Enterprise, EMC and NetApp to provide both advanced snapshot support functionality and backup storage integration.
One judge called Veeam Availability Suite v8 "a solid update that solves other VM data protection problems and continues to raise the bar with functions such as the backup hunter, backup throttling, WAN acceleration and more." Another judge noted that customers "seem to be very happy."

The Veeam Availability Suite v8 standard edition costs $1,100 per socket, and requires a Microsoft Hyper-V or VMware vSphere environment and a Windows-based server. Veeam does not charge per terabyte, for application support, or for any of its product components, which are unlimited for users.

BACKUP AND DISASTER RECOVERY SOFTWARE AND SERVICES

SILVER: Asigra Cloud Backup Version 13

Asigra Cloud Backup Version 13 offers a range of data protection capabilities. This update of the enterprise backup software platform includes a new snapshot manager and support for Docker container backup.
The Asigra Cloud Backup software protects Docker containers by eliminating the need for disruptive software agents, improving management, performance and security. Organizations can use Docker to move containers from one cloud to another, or from a cloud to an on-premises system and vice versa. Together with Docker, Asigra
provides flexibility as it can be used cloud-to-cloud, cloud
to on-premises, and on-premises to cloud. Users can set
automatic protection to protect data in both a local and
off-site backup vault.

Asigra's Amazon Web Services (AWS) Elastic Block Store (EBS) Snapshot Manager allows Asigra Cloud Backup users to quickly and easily create a snapshot schedule through a Web-based graphical user interface that automates taking snapshots of one or more AWS Elastic Compute Cloud instances. This automation eliminates the need to write complex scripts, reducing development time, management requirements and other resources.
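Asigra wraps this in a GUI, but the AWS calls underneath are public. Here is a minimal boto3 sketch of the kind of script the manager saves you from writing: snapshot every EBS volume attached to a set of EC2 instances (the instance ID is a placeholder, and a scheduler such as cron would supply the recurrence):

```python
import boto3

ec2 = boto3.client("ec2")
instance_ids = ["i-0123456789abcdef0"]  # placeholder instance IDs

# Walk each instance's block device mappings to find attached EBS volumes.
response = ec2.describe_instances(InstanceIds=instance_ids)
for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        for mapping in instance.get("BlockDeviceMappings", []):
            volume_id = mapping["Ebs"]["VolumeId"]
            # One create_snapshot call per volume.
            snap = ec2.create_snapshot(
                VolumeId=volume_id,
                Description=f"scheduled snapshot of {instance['InstanceId']}",
            )
            print("started", snap["SnapshotId"])
```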
Both Docker and AWS are highly popular computing platforms that are emerging as credible storage platforms for business information. Asigra says it is the first enterprise software vendor to add support for these platforms.
One judge praised Asigra for being early to market with the new capabilities. Another said that in addition to being very inexpensive, the Asigra Cloud Backup software is unique in its automatic protection of Docker containers and its ability to manage AWS EBS snapshots.
Asigra AWS EBS Snapshot Manager is free and can be downloaded without obligation. The Docker container backup is available in Version 13 of Asigra Cloud Backup and must be acquired through an Asigra partner; the estimated starting price is 15 cents to 20 cents per gigabyte, per month.
The Docker container capability is a new feature included in Asigra Cloud Backup Version 13. Asigra AWS EBS Snapshot Manager is a standalone product that can be used independently.

BACKUP AND DISASTER RECOVERY SOFTWARE AND SERVICES

BRONZE: Druva inSync 5.5


Druva inSync 5.5 endpoint backup software helps companies address the security and compliance risks created by workforce mobility, adoption of cloud applications, the BYOD trend and consumer file-sharing tools.
The Druva inSync 5.5 software monitors and notifies the user of compliance risks for sensitive data at rest, using preconfigured and customizable templates. The collection and backup of cloud application data provides a single access point to view and manage user data across separate data sources. A full-text search feature for e-discovery identifies the location of litigation-related files across all data sources, including file versions, deleted files and former employees' files.
The Druva inSync 5.5 cloud privacy framework provides organizations with granular data privacy management, while envelope encryption, data/metadata separation and block-level storage prevent Druva from accessing data that it stores in cloud storage services.
Through Druva inSync 5.5, an IT department has full visibility into corporate data stored on any endpoint, such as laptops and mobile devices, or a cloud app like Office 365, which facilitates collection, data protection and policy enforcement.


The Druva inSync software creates a master record of all endpoint and cloud app data, so any file can be easily recovered in the event of device loss. Data required for compliance audits, leak investigations and litigation requests can be supplied without physically searching devices or multiple cloud apps.
Druva says its design approach ensures data privacy, global scalability and rapid transfer speeds that overcome the usual limitations of cloud-based data protection. The software takes advantage of Druva's patented global source-side deduplication technology to reduce the storage and bandwidth required for backups and file sharing.
One judge called Druva inSync 5.5 "a solid update from an endpoint data protection leader," praising its innovative proactive compliance and more complete search. "Druva continues to excel in endpoint backup, compliance, security, and sync and share," the judge said.
Another judge noted that the Druva inSync 5.5 software is a very scalable endpoint solution with strong governance.
The base price begins at $6 per month, per user for cloud business and does not require hardware.

BACKUP HARDWARE

GOLD: ExaGrid EX32000E

The ExaGrid EX32000E backup deduplication appliance is a scale-out disk system built on ExaGrid's GRID architecture, with the latest 4.9 version upgraded to support RMAN for Oracle databases. The two previous upgrades increased capacity from 14 to 25 appliances in a GRID and increased the number of data centers supported to 16 for cross-site disaster recovery.
The company targets small to midrange customers with 10 appliance models of various sizes that can be installed in a configuration that does deduplication and replication without interfering with the backup process. The ExaGrid EX32000E uses adaptive deduplication, where deduplication and replication run in parallel with backup while system resources are prioritized for the backups to reduce the backup window.
Backups are written directly to the landing zone, with the most recent backups maintained in their full, original state, ready for requests. The ExaGrid EX32000E system does instant recovery of virtual machines from the landing zone. If the primary virtual machine is not available, the administrator can recover and run a virtual machine from the ExaGrid system within minutes. Local restores, instant virtual machine recoveries, audit copies and tape copies do not require rehydration. Instant virtual machine recovery occurs in seconds to minutes, much faster than with inline deduplication, which requires data rehydration. ExaGrid EX32000E backups occur in parallel with deduplication and off-site replication; the company calls this adaptive deduplication because the replication and deduplication do not affect the backup process.
ExaGrid's EX32000E can ingest 7.56 TB an hour with up to 25 appliances in a single, scale-out GRID architecture. Storage and ingest capacity are increased as data volumes increase, so there is no effect on the length of the backup window. The systems are plug-and-play, with each added appliance automatically discovered in the GRID. ExaGrid EX32000E appliance models can support from 1 TB to 32 TB and can be added as needed to the GRID.


BACKUP HARDWARE

SILVER: NEC Corporation HYDRAstor v4.4

NEC's HYDRAstor v4.4 garnered second place in the 2015 Products of the Year awards for backup hardware, as judges lauded it as the most scalable target dedupe appliance. The scale-out, Linux-based system can scale linearly up to 165 hybrid storage nodes.
"It's amazingly engineered but often ignored," said one of our judges of NEC HYDRAstor v4.4. "It's very feature-rich and it has great OST."
The latest 4.4 version of the NEC HYDRAstor introduced the Universal Deduped Transfer feature that eliminates the need to transfer duplicated data blocks from applications to HYDRAstor. The capability is a source-side deduplication that leverages server resources to reduce bandwidth usage. It does not require any application-specific integration, so multiple applications are serviced once it is deployed via a standard file-system interface.
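NEC doesn't document the internals of Universal Deduped Transfer, but generic source-side deduplication follows a simple pattern: hash each block on the sending side, and transfer a block only if the target has never seen its hash. A toy sketch; the fixed 64 KB block size and the Target class are illustrative assumptions, not NEC's design:

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # illustrative fixed-size chunking

class Target:
    """Stand-in for the appliance: keeps a hash index of stored blocks."""
    def __init__(self):
        self.known = {}    # digest -> block data
        self.stream = []   # ordered digests reconstructing the backup
    def store(self, digest, block):
        self.known[digest] = block
        self.stream.append(digest)
    def reference(self, digest):
        self.stream.append(digest)   # dedup hit: no block data transferred

def backup(path, target):
    """Send only blocks the target has not already stored."""
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            digest = hashlib.sha256(block).hexdigest()
            if digest in target.known:
                target.reference(digest)   # send a small hash, not 64 KB
            else:
                target.store(digest, block)
```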
With Universal Deduped Transfer, a single-controller hybrid node has a maximum throughput for general applications that increased from 4.9 TB per hour to 40 TB per hour, per controller. The system can scale out up to 165 hybrid storage nodes and has a performance capability of 1 PB per hour up to 4 PB per hour with inline global deduplication.
The NEC HYDRAstor system's distributed architecture can meet the needs of the low end and then scale to handle larger datasets. The system can expand and be refreshed online with no data migration necessary.
The new version of the NEC HYDRAstor also introduced new OST Accelerator support for Veritas NetBackup to automate and speed the backup process. This accelerator reduces the backup window because it offloads synthetic full-backup processing from the media server to HYDRAstor and automates the synthesis of the next full backup as soon as the new incremental backup is received. The accelerator allows the user to eliminate the weekly full backup from the job schedule and maintain an up-to-date full backup image with only daily incrementals.
NEC HYDRAstor is a scale-out system, built on an object store with advanced erasure coding instead of RAID.

BACKUP HARDWARE

BRONZE: Oracle Zero Data Loss Recovery Appliance

The Oracle Zero Data Loss Recovery Appliance is a scale-out device that functions as a relational database management system (RDBMS) data protection target, with backup and restore performance scaling linearly from 580 TB to more than 10 PB of usable capacity that it can move at 216 TB per hour.
Introduced last year, the appliance is built only for Oracle database protection, providing continuous protection for critical databases while offloading backup processes from production servers to reduce overhead. The Oracle Zero Data Loss Recovery Appliance is integrated with RMAN and Oracle Enterprise Manager.
The Oracle Zero Data Loss Recovery Appliance is designed to drastically reduce data-loss exposure, to sub-seconds. Its architecture uses a delta-push and delta-store method in which it only sends and stores changed blocks from the database to the appliance. The system creates virtual full copies during restores from the blocks and delta store to help cut down on the need for lengthy backups. When taking into account virtual full backups, Oracle says the system performs at 2 PB per hour.
The continuous, incremental backup process captures real-time database changes without impacting production performance.
The Oracle Zero Data Loss Recovery Appliance also can directly archive database backups to tape storage, with the archiving operations running 24/7 to improve drive utilization. Its changed data-store can be used to create virtual full database copies at any desired point in time, and the appliance can replicate data in real time to a remote Oracle recovery appliance or to an Oracle Database Backup Cloud service. Database blocks are continuously validated to eliminate data corruption during the transmission process.
One judge called the Oracle Zero Data Loss Recovery Appliance unique and said it goes beyond RMAN integration. "It's RDBMS-controlled data protection with huge levels of compression and zero RPO and RTO," he said. "For non-stop business continuity environments, it's an excellent value."
The hardware is a base rack with a minimum configuration of two compute nodes and three storage servers that can scale up to 18 nodes.

STORAGE MANAGEMENT TOOLS

GOLD: Data Dynamics Inc. StorageX 7.6

Judges agreed that the ease of integration, ease of use and innovation of Data Dynamics Inc.'s StorageX 7.6 file management software set it apart from other finalists in the storage management tools category.
StorageX software provides users with a unified dashboard view into all storage in a multi-vendor environment, and is used for data lifecycle management. It uses policy-driven automation to migrate and tier unstructured data in a distributed file system (DFS), using a three-phase copy process to minimize user disruption.
The Data Dynamics software also features quality of service (QoS) bandwidth throttling to maintain performance. When files are being migrated, StorageX automatically updates DFS namespaces so that manual LAN or WAN routing is not required, relieving one of the biggest pain points for administrators dealing with DFS migration.
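Data Dynamics doesn't document the three-phase mechanics, but the general pattern for disruption-minimizing migration is well known: a bulk baseline copy while the source stays live, repeated incremental passes to shrink the delta, then a brief cutover that does a final sync and repoints the namespace. A schematic, self-contained toy (dicts stand in for file shares; none of this is StorageX's API):

```python
import time

def sync_changed(source, target):
    """Copy entries whose content differs; return how many changed."""
    changed = 0
    for path, data in source.items():
        if target.get(path) != data:
            target[path] = data
            changed += 1
    return changed

def migrate(source, target, namespace):
    sync_changed(source, target)             # phase 1: baseline bulk copy
    while sync_changed(source, target) > 0:  # phase 2: shrink the delta
        time.sleep(0.1)                      # users may still be writing
    namespace["target"] = "new-filer"        # phase 3: cutover, repoint DFS

source = {"/a.txt": b"1", "/b.txt": b"2"}
target, namespace = {}, {"target": "old-filer"}
migrate(source, target, namespace)
assert target == source and namespace["target"] == "new-filer"
```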
StorageX version 7.6 added API integration to NetApp Data ONTAP and EMC Isilon and VNX, which allows the use of features native to those storage systems. This upgrade also added distributed file system management features, including replication and high availability. Additional updates to this version include the ability to create custom reports, automatic validation of file storage and DFS namespace resources, and role-based access to StorageX features.
StorageX scored the highest among the group of nine finalists, which included automation, provisioning and performance monitoring tools. Judges gave the Data Dynamics product the highest scores for its ease of use and manageability, and for performance and functionality. One judge said StorageX 7.6 was an essential update because it makes the product more competitive with other offerings and adds several advantages, such as automated file storage management. Another judge said it was unparalleled for file migration needs and predicted Data Dynamics StorageX software could save everyone 30% of new storage costs.
Judges also acknowledged that Microsoft DFS management is a narrow market, but one that is worth pursuing because the file systems are often complicated and lack management features.
Data Dynamics Inc. StorageX 7.6 is sold through the channel, and the list price starts at $500 per TB.

STORAGE MANAGEMENT TOOLS

SILVER: Dell Inc. Foglight for Storage Management 4.0

Dell Foglight for Storage Management was upgraded in April 2015, and judges agreed the added support for Hyper-V and its resource management capabilities made it a very competitive alternative in the storage management tools market.
Dell Foglight version 4.0 monitors and analyzes storage performance and availability in virtual environments. The product is installed as a virtual appliance and can be accessed through a standard Web browser. The single-pane-of-glass monitoring view spans both physical and virtual components in a data center, a big plus for administrators of virtual environments, as the two often have to be monitored separately. The monitoring tracks virtual machine performance so Dell Foglight can proactively predict the effects of changes to the environment. The tool is also able to detect over-provisioned virtual machines and other underused resources so that the capacity can be reclaimed and used more efficiently.
In version 4.0, Dell added storage capacity planning capabilities. Foglight estimates when a storage pool will exhaust all of its capacity so that administrators can plan ahead. Administrators can also choose to review reports across all storage arrays in an environment, or reports on a single array for a more granular view. According to Dell, these reports display real-time statistics as well as historical consumption charts, and long-term and short-term estimated trends, to alert users of how much time they have left before specific storage pools run out of capacity.
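Dell doesn't spell out Foglight's forecasting model, but the basic "time until full" estimate can be illustrated with a least-squares trend over recent consumption samples. The daily figures below are hypothetical:

```python
# Hypothetical daily used-capacity samples (TB) for one storage pool.
samples = [412.0, 414.5, 416.8, 419.6, 421.9, 424.3, 427.0]
pool_capacity_tb = 500.0

n = len(samples)
xs = list(range(n))  # day index 0..n-1

# Least-squares slope (growth in TB/day) and intercept.
mean_x = sum(xs) / n
mean_y = sum(samples) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# Days from the latest sample until the trend line crosses pool capacity.
days_left = (pool_capacity_tb - intercept) / slope - (n - 1)
print(f"growing ~{slope:.2f} TB/day; pool full in ~{days_left:.0f} days")
```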
With the addition of Hyper-V support, Foglight is now compatible with all major hypervisors; VMware, OpenStack and KVM are also covered.
Judges gave Foglight for Storage Management high scores for its ease of use, ease of integration and performance, citing that adding Hyper-V support was smart and that the product has "a better than average root cause analysis troubleshooting tool."
Dell acquired the Foglight technology in June 2012 when it bought Quest Software for $2.4 billion. Pricing for Dell Foglight for Storage Management starts at $499 per CPU socket.

STORAGE MANAGEMENT TOOLS

BRONZE: ProphetStor Data Services Inc. Federator SDS 3.0

In a category of tools each focused on one aspect of storage management, breadth of functionality is what made ProphetStor's Federator SDS version 3.0 stand out.
Federator SDS provides data services, automation and analytics. It is an out-of-band storage controller for heterogeneous scale-up or scale-out storage environments. When installed, it automatically detects storage hardware in an environment and pools the capacity in an abstracted layer so that all the functionality can be exposed to the user. Users can then access that storage, and the management and analytics functions, via a GUI or an open REST-based API.
Data services offered by Federator SDS include data migration, copy management, business continuity and disaster recovery. Storage features provided by storage hardware, such as thin provisioning, deduplication, compression and encryption, can be surfaced for all abstracted capacity.
Version 3.0 of Federator SDS was a significant upgrade that ProphetStor claims included features already on its roadmap as well as improvements based on customer feedback. Also called the Catalina release, the upgrade included the ability to virtualize storage volumes up to 16 exabytes in size. Storage tiering and data migration capabilities were also improved. Users can now use predefined or custom workload profiles to maximize the utilization of flash and caches. Another new feature of Federator 3.0 is dynamic quality of service, which can throttle throughput to guarantee performance. Support was also expanded in this release: VMware vSphere APIs for Array Integration and NFS are now compatible with Federator SDS.
ProphetStor claims Federator SDS is a cross-system manager that was developed to break vendor lock-in and create a horizontally orchestrated storage platform.
However, one judge claimed ProphetStor could have a hard time getting support from other storage vendors because it commoditizes storage control. That judge also noted the product's innovation could make up for any potential support issues and still help it move farther into the market. "The out-of-band nature and ability to federate is clever, fully functional and innovative, giving it a reasonable chance for success," he said.

STORAGE SYSTEM SOFTWARE

GOLD: DataCore SANsymphony-V Adaptive Parallel I/O

After third-place finishes in 2010 and 2013, DataCore SANsymphony-V Adaptive Parallel I/O software finally broke through to win the gold award in 2015.
The DataCore SANsymphony-V storage software lets multicore servers use the processing power of all available cores to execute and schedule multiple I/O threads to eliminate bottlenecks, boost application performance, and facilitate the consolidation of virtual machines (VMs), application workloads and physical servers.
Many have tried to address the performance problem at the device level by adding solid-state storage (flash) to meet the increasing demands of enterprise applications, or by hard-wiring these fast devices to VMs in hyper-converged systems. "However, improving the performance of the storage media, which replacing spinning disks with flash attempts to do, only addresses one aspect of the I/O stack: read performance," said George Teixeira, CEO and co-founder of DataCore, in a company white paper.
DataCore SANsymphony-V Adaptive Parallel I/O software can enable the processing of more data storage requests in a given time frame and accelerate an application's ability to both read and write to storage. The I/O requests would otherwise be waiting in line to get serviced.
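DataCore's scheduler is proprietary, but the serialized-versus-parallel contrast the company describes is easy to demonstrate: when each request spends most of its time waiting on a device, one worker per core drains the queue far faster than a single thread. A toy timing sketch (the 1 ms sleep stands in for device latency):

```python
import concurrent.futures
import os
import time

def service(request_id):
    """Stand-in for one storage I/O request (~1 ms of device wait)."""
    time.sleep(0.001)
    return request_id

requests = list(range(1000))

# Serialized: a single thread services requests one at a time.
start = time.perf_counter()
for r in requests:
    service(r)
serial_s = time.perf_counter() - start

# Parallel: a pool of workers services many requests concurrently.
start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    list(pool.map(service, requests))
parallel_s = time.perf_counter() - start

print(f"serial: {serial_s:.2f}s  parallel: {parallel_s:.2f}s")
```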
Our judging panel rated DataCore's Adaptive Parallel I/O first in performance among the 13 finalists in the storage system software category. The product also tied for first place in innovation and value, and ranked second in functionality.
One judge said there is nothing else like DataCore's Adaptive Parallel I/O on the market, and he called the software "potentially revolutionary." Another judge noted that most software-defined storage or storage virtualization products are not capable of parallel I/O. "More people ought to be evaluating SANsymphony," said one judge.
The Adaptive Parallel I/O software is an upgrade to the DataCore SANsymphony-V10 storage virtualization product, which can pool capacity across heterogeneous storage hardware and provide storage management capabilities. The DataCore SANsymphony-V software can run on standard x86 servers. Pricing starts at less than $5,000 per node.
One judge cautioned that the DataCore software is designed for storage in Microsoft Windows Server environments. He said DataCore needs a Linux version.

STORAGE SYSTEM SOFTWARE

SILVER: Hedvig Inc. Hedvig Distributed Storage Platform

The Hedvig Distributed Storage Platform snared the silver award in our Products of the Year competition for storage system software less than a year after Hedvig Inc. launched from stealth.
The Hedvig software runs on off-the-shelf x86 and ARM servers, and supports in-software provisioning of block, file and object storage. Customers can deploy the software-defined storage on premises and in public clouds, in hyperscale or hyper-converged mode. They can scale the system from several terabytes to petabytes by adding commodity servers and manage the storage as a single pool.
Granular storage policies enable the provisioning of capabilities such as inline deduplication and compression, snapshots and clones by application, virtual machine (VM) or container. Additional product features include a Web interface to enable IT administrators to provision storage from any device, and built-in, tunable multi-site replication to send up to six copies of data to various sites or clouds.
The Hedvig Distributed Storage Platform supports a wide range of hypervisors and operating systems and provides a broad set of RESTful APIs, drivers and plug-ins to enable integration with technologies such as Docker containers, Hadoop, NoSQL, OpenStack and VMware vCenter.
Our panel of judges scored the Hedvig Distributed Storage Platform first in functionality; it tied for first in innovation as well as ease of use and manageability. The product also ranked second in value and tied for second in ease of integration among the 13 finalists in the competition's storage system software category.
One judge labeled the Hedvig Distributed Storage Platform "the best new competitor." He said the product "certainly checks all the boxes."
Hedvig customers have two purchase options: a perpetual, capacity-based license starting at 50 cents per GB, or an annual subscription starting at $5,000 per server. Both options include all features and capabilities.

STORAGE SYSTEM SOFTWARE

BRONZE: Zadara Virtual Private Storage Arrays OPaaS 15.07

The June 2015 release of Zadara Storage Inc.'s Virtual Private Storage Arrays (VPSA) on-premise as a service (OPaaS) packed in enough substantial new features to merit the bronze award in the storage system software category.
Zadara Storage provides on-demand, enterprise-grade block and file storage, based in part on OpenStack technology, with dedicated resources on premises, in the cloud or in multiple locations. The company's VPSA OPaaS version 15.07 added support for snapshot-based backup to Amazon S3-compatible targets, multi-zone high availability (through geographically distributed replication across a metropolitan area network), Microsoft's Volume Shadow Copy Service (VSS) and Docker containers.
Zadara Storage COO Noam Shendar told SearchStorage that the company integrated the Docker container capability to enable customers to run applications or arbitrary code within the storage system rather than at the server. "Think of this as hyper-convergence backwards. Instead of running compute with storage inside the compute, this is storage with compute inside the storage," said Shendar.
Scoring from the judging panel rated Zadara's VPSA OPaaS in a tie for first place for value as well as ease of use and manageability, in third place in both innovation and performance, and tied for third in functionality among the 13 finalists in the category.
"Clever, useful, effective cloud-based service (public, private and hybrid) makes it quite innovative, especially with the guaranteed quality of service (QoS)," said one judge.
Zadara Storage delivers the storage system to the customer site at no cost and, after the storage is racked and connected, contacts the system via an external network to configure it based on the user's needs. The Zadara system handles monitoring, management and maintenance. The software supports iSCSI Extensions for RDMA (iSER) to boost performance and reduce latency.
Customers consume the storage via a Web-based interface and pay based on consumption. Pricing starts at $0.01 per GB, per month for a fully managed, on-premises system, inclusive of hardware, software, remote monitoring and management, support and a service-level agreement.

STORAGE SYSTEMS: ALL-FLASH SYSTEMS

GOLD: SolidFire SF9605

SolidFire only does all-flash storage, beginning in 2011 during the early days of all-flash in the enterprise. The startup's main innovation comes from its operating system software that delivers features such as quality of service (QoS), which helped SolidFire distinguish its product's value early on.
In 2015, SolidFire, which has since been acquired by NetApp, rolled out version eight of its Element OS, a version known as Oxygen, and the SolidFire SF9605 storage node. The SolidFire SF9605 scales to 44.5 TB of effective capacity and 50,000 predictable input/output operations per second.
Features added to Element over the years have helped SolidFire transition from a vendor selling exclusively to service providers to one with customers split between providers and large enterprises. Advanced data protection functionality and expanded multi-tenant security were key focuses of Oxygen.
New features include synchronous replication, and snapshot replication and scheduling. Synchronous replication writes data coming into the system to primary and secondary sites at the same time, and is an important feature for enterprises.
SolidFire already supported asynchronous replication, which writes data to primary storage first and then copies it to the secondary array. SolidFire's snapshot replication allows customers to copy snapshots to a second site. Administrators can use the scheduler to determine the number of recurring snapshots and how long they stay on the array.
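The difference between the two modes comes down to when the write is acknowledged. A toy sketch (the Site class and queue-based replicator are illustrative, not SolidFire's Element OS API):

```python
import queue
import threading

class Site:
    """Stand-in for a storage site's block store."""
    def __init__(self):
        self.blocks = {}
    def write(self, key, data):
        self.blocks[key] = data

primary, secondary = Site(), Site()
pending = queue.Queue()

def synchronous_write(key, data):
    """Acknowledge only after BOTH sites hold the data (no loss on failover)."""
    primary.write(key, data)
    secondary.write(key, data)
    return "ack"

def asynchronous_write(key, data):
    """Acknowledge after the primary write; copy to the second site later."""
    primary.write(key, data)
    pending.put((key, data))   # background replication picks this up
    return "ack"

def replicator():
    while True:
        key, data = pending.get()
        secondary.write(key, data)
        pending.task_done()

threading.Thread(target=replicator, daemon=True).start()
asynchronous_write("blk-1", b"data")
pending.join()   # demo only: wait for the background copy to land
```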
SolidFire expanded its VLAN tagging, with Oxygen enabling support for 256 secure, logically isolated per-tenant storage networks on one platform. It also added LDAP authentication support for security.
SolidFire allows its feature set to be delivered as a service, inside its arrays or as software-only for industry-standard flash hardware.
The SolidFire SF9605 scored highest among our judges in innovation among all-flash finalists. The SolidFire SF9605 also won solid marks for ease of use and functionality.
"Highly innovative for multi-tenant, shared infrastructure or managed service providers, especially with guaranteed QoS," one judge commented of the SolidFire SF9605. "Its new feature functions expanded on that while bringing it up to par on enterprise-required features."

STORAGE SYSTEMS: ALL-FLASH SYSTEMS

SILVER: Tintri VMstore T5000

Tintri, known best for its virtual machine-aware storage, was a late arrival to all-flash storage. Tintri arrays had flash in them from the start, but they were hybrids that used solid-state drives to handle all writes and most IOPS in the system. Around 15% of total capacity on a Tintri hybrid array is solid state, with SATA hard drives making up the rest.
The arrival of the Tintri VMstore T5000 All-Flash shows how all-flash is becoming almost mandatory for storage vendors. It also comes as no surprise that the T5000 takes the same VM-aware approach to storage as other Tintri VMstore arrays.
The Tintri VMstore T5000 can scale to support 5,000 VMs in just two rack units. It can isolate each VM in its own lane through quality of service, and allows for provisioning, replication, analytics and management at the VM level. The T5000 uses the same OS as Tintri's T800 Hybrid-Flash arrays, so admins can balance workloads across all-flash and hybrid systems and manage their entire footprint from one pane of glass. The system supports any server hypervisor. The all-flash arrays also make use of the data deduplication that is built into Tintri's hybrid arrays.
The Tintri VMstore T5000 uses a 12 Gbps SAS backplane to 24 SSDs for up to 23 TB of raw capacity. It is a dual-controller system, with each controller supporting up to four 10 Gigabit Ethernet connections. The T5000 uses a 64 Gbps non-transparent bridge between controllers for high speed with data integrity. Tintri claims the T5000 can deliver sub-millisecond latency, and that the system can be installed and configured in less than 40 minutes.
As with all Tintri storage, VM-aware management means customers don't have to provision LUNs or deal with NAS mount points.
"VMstore was very innovative when it first came out several years ago, and this iteration evolves it," one judge said.
The Tintri VMstore T5000 All-Flash series consists of the T5080 and T5060. The T5080 holds 23 TB of raw capacity; the T5060 holds 11.5 TB and can handle 2,500 VMs. Both are 2U systems. Pricing for the Tintri VMstore T5060 starts at $250,000 for 11.5 TB of raw (36 TB effective) capacity.

STORAGE SYSTEMS: ALL-FLASH SYSTEMS

BRONZE: Hewlett Packard Enterprise 3PAR StoreServ 20850

Hewlett Packard Enterprise (HPE) goes against the common wisdom that an all-flash array has to be built from the ground up for flash to unlock its complete value. HPE uses its 3PAR StoreServ flagship storage platform for all-flash and hybrid storage, claiming the original 3PAR design required only tweaks to take full advantage of flash.
The HPE 3PAR StoreServ 20850 is the vendor's highest capacity all-flash system, holding 1,024 solid-state drives (SSDs), and HPE claims it can deliver more than three million IOPS with sub-millisecond latency. The commercial MLC SSDs range from 480 GB to 3.84 TB. The 3PAR Gen5 ASIC for silicon-based hardware acceleration handles inline deduplication with little drag on performance. The ASIC also enables thin provisioning.
The HPE 3PAR StoreServ 20850 includes eight controllers that form a Mesh-Active cluster for load balancing. Adaptive Sparing technology, a feature of the 3PAR operating system, extends flash-based media endurance by adjusting the system's sparing approach to help minimize impact on SSDs. Persistent Checksum offers data protection from host to array to guard against silent corruption.
Pricing for the 20850 begins at around $110,000 for two drive enclosures, 3.84 TB of capacity and the 3PAR OS. HPE said the system can cost as little as $1.50 per usable GB, factoring in data reduction.
The HPE 3PAR StoreServ 20850 won the highest overall scores from judges for performance and functionality of any all-flash array finalist. "The breadth of products using one architecture approach is impressive," one judge commented. "The 20850 leverages both scale-up and scale-out architecture."
HPE 3PAR has had a strong history of winning Products of the Year, dating to before HP acquired the company in 2010. The 3PAR Utility Storage System won a bronze in the disk system category in 2002, the first year of the awards. The 3PAR InServ F400 Storage array won a gold medal in 2009, and the HPE 3PAR StoreServ 7450 All-Flash Storage system was the all-flash gold medalist last year.

STORAGE SYSTEMS: DISK AND HYBRID SYSTEMS

Qumulo Inc. Qumulo Core


Startup Qumulo Inc. combines data-aware scale-out NAS with built-in analytics for traditional file systems and object storage. Designed for petabyte-scale deployments, Qumulo Core is built to intelligently manage billions of data objects. Engineered by inventors of the Isilon scale-out NAS (now owned by EMC), Qumulo Core is a software-only installation that runs atop a Linux-based hybrid flash storage system on pre-validated commodity hardware, dedicated servers or virtual machines.
The Qumulo Scalable File System curates, manages and stores data. The Core database system files are served from flash storage. Core's analytics are embedded within data and storage, enabling users to identify the data that is most valuable, see where it is stored and how it is being accessed, and orchestrate archiving, backup or deletion of files.
"Scale-out NAS meets data management, analytics and storage resource management. Very innovative," is how one judge described Qumulo Core. Another judge praised Qumulo Core for its "great value and innovative data utilization awareness that scales to (support) billions of objects."
A Qumulo Core cluster scales from four nodes to more than 1,000 nodes, creating a single file system and single global namespace. Each node runs Qumulo Core and participates in the cluster as a fully symmetric peer, managing read and write requests and coordinating transactions with other nodes. Nodes can be added non-disruptively for linear scaling of storage capacity and storage performance.
Each Qumulo node is a modular building block that comprises processing power, memory, networking, flash and spinning disk, with two 10 Gigabit Ethernet ports. Core supports NFS and SMB file storage and features a programmable REST-based object management API.
Qumulo's software-as-a-service delivery model is sold as an annual subscription. Entry pricing for a four-node, 100 TB raw capacity Qumulo QC24 hybrid storage cluster begins at $50,000. n

SILVER

STORAGE SYSTEMS: DISK AND HYBRID SYSTEMS

Reduxio Systems Reduxio HX550


The Reduxio HX550 iSCSI hybrid flash storage array is configured with patented cross-volume BackDating technology. BackDating continuously records data across multiple tiers and supports unlimited snapshots for recovery to any point in time. Reduxio writeable clones are independent of the parent volume. Multi-level cascading supports versioning.
Powered by the vendor's TimeOS operating system, the Reduxio HX550 is rated to deliver 50,000 IOPS with sub-millisecond latency. TimeOS provides insights about how much space is being consumed by data protection, enabling users to modify policies on the fly.
Reduxio Systems combines flash storage for write IOPS with global inline data compression and deduplication. The array writes deduplicated data to flash and offloads data to hard disk drives. Active data remains on SSDs.
The Reduxio HX550 deduplicates data in 8K blocks in a pre-memory buffer and applies a unique time stamp to each block in its database. System metadata is kept in a separate database, including log data that indicates when blocks were written, along with synchronization information.
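That description suggests a simple mental model for BackDating-style recovery: if every deduplicated block carries a time stamp in a metadata log, reading a volume "as of" a given moment is a lookup of the newest block version at or before that time. The sketch below is only that mental model, with hypothetical structures; it is not Reduxio's actual design.

```python
# Per logical block: a time-ordered log of (timestamp, contents) versions,
# standing in for the time-stamped block database described above.
block_log = {
    0: [(100, b"v1"), (250, b"v2"), (900, b"v3")],
}

def read_as_of(lba, ts):
    """Return a block's contents as they stood at time ts."""
    latest = None
    for when, data in block_log[lba]:   # log entries are time-ordered
        if when > ts:
            break
        latest = data
    return latest

assert read_as_of(0, 300) == b"v2"      # recover to any point in time
```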

The 2U hardware holds 24 hard disk drives or SAS-connected enterprise multilevel cell (eMLC) NAND flash
drives and tops out at 40 TB of raw capacity. The vendor
claims in-memory data reduction can boost effective
capacity to around 120 TB. Projected use cases include
midrange enterprises that run VMware virtual servers,
virtual desktop infrastructure (VDI), and line-of-business
applications for resource planning and customer relationship management.
Judges were impressed by the way Reduxio optimizes and continuously protects data, particularly the ability to roll back to any point-in-time snapshot. One evaluator wrote that Reduxio was innovative for "(its) use of metadata in a block storage system that supports infinite snaps [and] intelligent tiering."
Another judge described the Reduxio HX550 as a "clever hybrid array that combines brilliant data protection with data reduction, granular storage tiering and automated consistency."
Reduxio said its wizard-based installation takes about
10 minutes. The vendor claims its storage costs around $1
per usable GB with data reduction. Reduxio HX550 arrays
carry a three-year warranty. n
BRONZE

STORAGE SYSTEMS: DISK AND HYBRID SYSTEMS
HPE 3PAR StoreServ 8440 Storage

The hybrid HPE 3PAR StoreServ 8440 is the latest iteration of Hewlett Packard Enterprise's converged system that blends hybrid flash and traditional spinning disk. StoreServ 8440 is the highest capacity model in Hewlett Packard Enterprise's (HPE) 3PAR StoreServ 8000 storage area network (SAN) product family, which also includes the 8200 and 8400 hybrid and the all-flash 8450 systems.
StoreServ 8440 arrays scale to 3 PB of raw storage with a combination of solid-state drives and hard disks, with support for block and file protocols.
Rated by the vendor to deliver more than 1 million IOPS with sub-millisecond latency, the HPE StoreServ 8440 comes with 10 Ivy Bridge processor cores for high-performance storage. Although not built as a native flash system, HPE designed the StoreServ 8440 array with features for extending flash endurance and granular data reduction.
Our judges praised the HPE 8440 iSCSI array's versatile performance. "The most solid all-around modern storage array" was the assessment of one evaluator, while another judge hailed it as a "very flexible midrange workhorse." Other judges remarked on HPE's competitive street discounts.
The StoreServ 8440 supports consumer-grade multilevel cell NAND flash drives in capacities up to 3.84 TB. Four controllers are configured in a mesh-active cluster for load balancing and high performance. Controller ASICs automatically change the frequency of metadata offloads from cache to flash media and also support inline data deduplication.
The HP 3PAR operating system includes Adaptive Sparing technology to extend the life of flash media. Persistent
checksum on the ASIC ensures write integrity, while
Adaptive Flash Cache uses flash capacity to adaptively
extend system cache, accelerate workloads and improve
service levels.
HP Smart Start software taps intelligence embedded
in SAN components to automate installation. Users can
manage block, file and object access for up to 16 arrays
through the StoreServ Management Console.
Pricing for the HPE 3PAR StoreServ 8440 hybrid system starts at $31,290, which includes a configuration of six 300 GB HDDs and the 3PAR Operating System. n
2015 Products of the Year finalists


Finalists across the six award categories:

• Actifio Sky Version 6.1.2
• Arcserve Unified Data Protection (UDP) 7000 Appliance Series
• Artisan Neverfail IT Continuity Engine 7.1.1
• Asigra Cloud Backup Version 13 Featuring Docker Backup/Recovery and Asigra AWS EBS Snapshot Manager
• Atlantis Computing Inc. USX 3.0
• Avere Systems Inc. Virtual FXT Edge Filer version 4.5
• Barracuda Backup
• Brocade Communications Systems Analytics Monitoring Platform
• Caringo Inc. FileFly
• Coho Data DataStream 1008h
• Data Direct Networks (DDN) GS7K
• Data Dynamics Inc. StorageX 7.6
• DataCore Software Corp. SANsymphony-V10 Adaptive Parallel I/O software
• DataGravity Inc. Discovery Series V2
• Datto Inc. SIRIS 2
• Dell Inc. Foglight for Storage Management 4.0
• Dell Storage SC4020 TLC array
• Druva inSync 5.5
• EMC Corp. VMAX3
• EMC Corp. XtremIO
• EMC Data Domain DD9500
• ExaGrid EX32000E Appliance, Version 4.9
• FalconStor Software Inc. FreeStor
• Gridstore Inc. All-Flash HyperConverged Appliance (HCA)
• Hedvig Inc. Hedvig Distributed Storage Platform
• Hewlett Packard Enterprise Co. HP 3PAR StoreServ 8440 Storage
• Hewlett-Packard OneView for VMware vCenter 7.7
• HGST Active Archive System SA-7000
• Hitachi Data Instance Director v5
• Hitachi Data Systems Corp. Hitachi Content Platform (HCP) Anywhere
• Hitachi Data Systems VSP G800
• HP ConvergedSystem 250-HC StoreVirtual
• HP StoreOnce 2900
• HP StoreOnce Recovery Manager Central
• HPE 3PAR StoreServ 20850
• IBM Corp. IBM Spectrum Scale 4.1.1
• IBM Corporation Spectrum Control Storage Insights
• IBM FlashSystem V9000
• Infinidat Inc. InfiniBox F2000
• Infinio Systems Inc. Infinio Accelerator v2.0
• Kaminario Inc. K2 v5.5
• Maxta Inc. Maxta Storage Platform (MxSP) 3.1.1
• NEC Corporation HYDRAstor 4.4
• NetApp AltaVault
• NetApp Inc. All-Flash FAS8000 (AFF 8000)
• NexGen Storage QoS Manager for vCenter 6
• Oracle Corp. ZFS Storage ZS4-4
• Oracle Database Backup Service
• Oracle Zero Data Loss Recovery Appliance
• Panasas, Inc. ActiveStor 18
• Permabit Technology Corporation SANblox 1.2
• ProphetStor Data Services, Inc. Federator SDS Suite
• Pure Storage Inc. FlashArray m series
• Quantum Artico NAS Storage Appliance
• Qumulo, Inc. Qumulo Core
• Quobyte Inc. Quobyte Storage System 1.2
• Reduxio Systems Reduxio HX550
• SolidFire SF9605
• Tintri VMstore T5000 All-Flash series
• Unitrends Recovery-Series 936S
• Veeam Availability Suite v8
• Veritas InfoScale 7.0
• Veritas NetBackup 7.7
• Veritas Technologies LLC's NetBackup 5330 Appliance
• Violin Memory Inc. 7300/7700 Flash Storage Platform
• Virtual Instruments VirtualWisdom 4.3
• VMware Inc. VMware Virtual SAN 6.1
• X-IO ISE 800 Series G3 All Flash Array (ISE 800)
• Zadara Storage Inc. VPSA OPaaS 15.07
• Zerto Virtual Replication 4.0

About the Storage magazine/SearchStorage.com Products of the Year

Storage magazine and SearchStorage invited data storage product companies to nominate new or enhanced products for the 2015 Products of the Year awards. For previously available products, the upgrade must have incorporated significant new features. Products could be entered in six categories: backup and disaster recovery software and services; backup hardware; storage management tools; storage system software; storage systems: all-flash systems; and storage systems: disk and hybrid systems. Products were judged by a panel of users, analysts, consultants, and Storage magazine and SearchStorage editors. Products were rated based on innovation, performance, ease of integration into the environment, ease of use and manageability, functionality and value.


Snapshot 1

Backup hardware buyers favor big vendors with capacity a priority

Who did you choose for your most recent backup hardware purchase, and who was on your short list?

[Chart: purchased-from and short-list percentages for Dell, EMC, HPE, IBM, NetApp, Quantum, Fujitsu, Hitachi, Oracle, Sepaton (Hitachi) and Spectra Logic]

What features were most critical to your backup hardware purchase?*

49% Capacity
38% Scalability
35% Ease of implementation
32% Ease of management
20% Deduplication
19% RPO demands
16% Backup reporting
16% Archive functionality
14% RTO demands
11% Snapshots/replication

*UP TO THREE SELECTIONS ALLOWED

The average number of TB backed up: 661

SOURCE: TECHTARGET RESEARCH

HYPER-CONVERGENCE

An in-depth look at hyper-converged systems

Despite limitations, the fast deployment and easy management of hyper-converged appliances make them attractive.

BY ERIC SLACK

HYPER-CONVERGENCE IS HOT. It's a pervasive topic in data centers these days, one that has intrigued storage and other IT managers but has also generated a lot of confusion among users, confusion that may be partly attributable to hyper-converged vendors and their aggressive marketing.
The hyper-converged appliance (HCA) product category is large and very active, with new vendors joining and leaving the market, which can add to the confusion. In the following, we describe what hyper-convergence technology really is, show how it's not necessarily synonymous with hyper-converged appliances and why that's an important distinction to make, and discuss the differences among various HCA offerings and what those differences mean in the course of an evaluation of hyper-converged products.

CONVERGED VS. HYPER-CONVERGED INFRASTRUCTURE

To properly understand hyper-convergence, we need to start with converged infrastructures, the rack-based packages that combine existing storage, server and networking components. These systems provide either a complete storage and compute infrastructure at the rack level by selling commercially available components bundled as a single SKU (such as VCE's Vblocks) or a reference


architecture from which users or integrators can assemble a complete system (such as the Cisco/NetApp FlexPods).
The primary value of a converged system for customers is a reduction in deployment time. Convergence essentially takes the integration out of building compute infrastructures, simplifying the implementation of off-the-shelf components, or it might just remove the design step as in the case of reference architectures, which still require some assembly. The scale of the typical product is fairly large as well, in terms of both physical size and capacity, with many converged infrastructures taking up most of a data center rack.
While converged systems bundle rack-level components into an integrated offering, hyper-converged infrastructures combine the compute, storage and networking functions into the same chassis. These infrastructures can be assembled from independent hardware and software components by users or integrators, or companies can buy a hyper-converged appliance: a turnkey, single-vendor product that includes comprehensive support.

The Open Storage Platform for do-it-yourself hyper-convergence

BUYING AN APPLIANCE isn't the only way to get the benefits of hyper-convergence. Many of the vendors of software-defined storage are selling those products in conjunction with the appropriate hardware components and enabling users to create their own hyper-converged infrastructures. While these aren't hyper-converged appliances, based on the criteria we used for this article, these created hyper-converged infrastructures do offer a way for users to get an economical, highly scalable storage system that can support a compute function if desired.
These "roll your own" systems can be used in many of the same environments as HCAs, but they're not turnkey products. On the positive side, they offer more flexibility and often cost less. But they require integration by users or a third-party company, and can create support confusion between hardware and software vendors.
One example is VMware's VSAN Ready Nodes, which are validated configurations from qualified hardware OEMs that support VSAN. n

INSIDE THE HYPER-CONVERGED APPLIANCE

The hyper-converged technology category has become quite popular and very crowded, as there are multiple ways to create a product that marries storage, compute and networking functions (See: The Open Storage Platform). For our purposes, we'll focus on hyper-converged appliances, the all-in-one products currently being sold by a couple dozen vendors.

To be classified as a hyper-converged appliance, a product must meet the following criteria:

• Hardware and software. While software is always at the core, HCAs aren't software-only products.

• Single-vendor product. HCAs should have one part number, require one purchase order and provide "one throat to choke" support; an HCA vendor must provide an end-to-end platform.

• Comprehensive management. An HCA is a turnkey product that, at a minimum, manages and simplifies the setup and initial configuration of the product and, ideally, provides simpler operation.

• Storage federation. There must be a software layer that virtualizes physical storage capacity in each node and federates those nodes into a common management point.

• Hypervisor. HCAs include an embedded hypervisor for running compute workloads and, typically, to run the VM-based management and the storage software.


EVO:RAIL HYPER-CONVERGED APPLIANCES

There are two general categories of HCA products: those using VMware's EVO:RAIL software and those that use proprietary software. VMware is the most common hypervisor platform in corporate data centers, and it enjoys this same advantage in the HCA market, as the vast majority of hyper-converged systems support VMware. The company developed a virtual SAN product (VSAN) to provide an integrated scale-out storage product for VMware users.
EVO:RAIL is a combination of technologies that leverages the proliferation of the ESX hypervisor to create a hyper-converged architecture that's easy for existing hardware vendors to use to create HCAs by bundling the software with their hardware.

EVO:RAIL uses the vCenter Server database, effectively
integrating EVO:RAIL management into vCenter Server
and simplifying HCA operations for users already familiar
with VMware. This integration also means that changes
made in vCenter Server will be reflected in EVO:RAIL
and vice versa. And, instead of being run as a VM, VSAN
is incorporated into the hypervisor kernel, potentially
making the storage process more efficient.
There are currently eight qualified EVO:RAIL hardware partners: Dell, EMC, Fujitsu, Hitachi, NetApp, SuperMicro, Inspur and NetOne (the first six listed here are covered in Evaluator Group's hyper-converged appliance comparison research). EVO:RAIL uses a fixed configuration for CPUs, memory and storage resources that can be implemented by its hardware partners. With very
few exceptions, EVO:RAIL-based offerings follow this configuration:

• Dual CPUs with a total of 12 or 16 cores
• Up to 256 GB of memory
• 4- to 16-node clusters using a four-node server chassis
• Up to 3.6 TB of hard disk capacity and 480 GB of flash storage capacity per node


An EVO:RAIL product allows users to get an HCA from an established company, since most of VMware's partners are traditional storage vendors. Also, because it's tightly integrated with VMware, it's an attractive choice for VMware users. Compared with the other HCA products, EVO:RAIL has limited configuration options and somewhat smaller scalability, although a 16-node configuration can support a large percentage of the typical HCA use cases. VMware is the only hypervisor supported and iSCSI is the lone storage protocol. In terms of storage features and functionality, EVO:RAIL doesn't support remote replication, deduplication, compression or multi-tenancy. This adds up to very little product differentiation for hardware vendors, with a few exceptions.
On the software side, Dell has recently partnered with Nexenta, adding NexentaConnect software to its EVO:RAIL offering. This brings inline deduplication and compression, as well as NFS and SMB file protocols, to the block-based iSCSI supported by VSAN. On the hardware side, SuperMicro almost doubles the standard EVO:RAIL per-node compute and storage resource limits and increases the maximum cluster size to 32 nodes.

PROPRIETARY HYPER-CONVERGED APPLIANCES

The most popular HCA products use proprietary software packages that run on industry-standard server hardware, but are sold as pre-configured modules. While they all use Intel x86-based servers, most are customized to look like purpose-built appliances. Hyper-converged appliance vendors who fit this category include Atlantis Computing, Dell (using Nutanix software), HPE, Maxta, Nutanix, Scale Computing and SimpliVity, among others. Given the activity in this dynamic product space, you can expect several more vendors to join this list in the coming year.
At the core of these systems is their storage software, which provides the abstraction and federation layer required to pool physical storage resources from multiple server chassis and provide a centralized management function. While not specifically required, the vast majority of HCA products use a scale-out topology. Most systems support iSCSI, but two (Nutanix/Dell and Scale Computing) also provide file system protocols, and Atlantis Computing supports NFS exclusively.

MANY MODELS OF PROPRIETARY HCAs

One of the biggest differences between EVO:RAIL-based


HCAs and their proprietary counterparts is the number
and variety of configurations available. Since HCAs by
definition are pre-configured appliances, product flexibility is achieved by adding new models. Where EVO:RAIL
offers basically two configurations, some HCA vendors
have a half dozen or more models to choose from. This allows them to create products with different combinations

of compute, memory and storage resources, and different scalability profiles. It can also add to the complexity of the purchase decision.
For example, proprietary systems routinely provide 24 CPU cores and over 500 GB of RAM. Cluster configurations typically start at three nodes, and several have no specified node limits at the top end. Per-node storage capacities are routinely greater than 20 TB for hard drives and well over 2 TB of flash for many models, more than four times the typical EVO:RAIL configuration. While the vast majority of storage configurations are hybrid, two vendors, Nutanix and Dell (using Nutanix software), offer all-flash configurations. Atlantis Computing offers only flash products, and Scale Computing offers only hard disk storage.

HYPER-CONVERGED APPLIANCE FEATURES

The exact feature sets of hyper-converged appliances


vary by product, of course, but common features for
proprietary HCAs include local and remote replication,
stretched- or metro-cluster support, deduplication, thin
provisioning, compression, encryption, snapshots, clones,
load balancing, quality of service (QoS), cloud-based
backup and recovery, and WAN optimization. Most products support VMware and have integrated at least some
management and control with vCenter. In addition, most
support VMware advanced features, such as vSphere
Distributed Resource Scheduler (DRS), vSphere High
Availability (HA), vSphere Storage APIs - Array Integration (VAAI), and so on.

SCALING HCAs UP AND DOWN

Hyper-converged appliances have enjoyed significant success for remote office and branch office (ROBO) deployments as well as in smaller companies. Besides ease of setup and operation, an important requirement for those environments is the ability to start small. For example, SimpliVity supports single-node clusters, although two nodes are recommended, and other vendors are expected to announce clusters of fewer than three nodes soon.
On the scale-up side, most HCAs simply add storage-heavy hardware configurations, but there are exceptions. HPE allows its HCA cluster to connect to its existing scale-out storage system (StoreVirtual VSA), and Maxta supports external, direct-attached storage (DAS) capacity.

HOW THE HYPER-CONVERGED APPLIANCE ROADMAP IS SHAPING UP

The hyper-converged appliance product segment will


continue to grow in the coming year, with more vendors
joining the market. You should also expect some vendors
to exit, from both the EVO:RAIL and the proprietary
HCA categories. For example, HPE stopped selling its
EVO:RAIL product in 2015 to focus on its proprietary
technology, and a startup or two appear to be struggling
to stay in business. Storage scalability will be an important
characteristic with more vendors introducing storage-only
node options and with more products supporting external
storage systems or devices. At the same time, expect to see
more two-node configurations being released to address
the ROBO and small office use cases.

It's also likely that EVO:RAIL vendors will strive to differentiate their products, with larger capacity configurations and additional features. The wide variety of models in the proprietary category will continue as vendors cast about for new configurations to address new markets.
That said, don't look for HCAs to expand into the traditional data center, as much as the HCA vendors might like that to happen. The cost and inflexibility of turnkey appliances make an HCA a less-than-ideal alternative in those environments. But new use cases will arise, such as private and hybrid clouds, and special projects like Hadoop and big data analytics that drive the purchase of new infrastructures. HCAs may prove to be appropriate for some of those use cases, but the Open Storage Platform may provide more compelling storage platforms for those scenarios. n
ERIC SLACK is a senior analyst with Evaluator Group, where he focuses on scale-out architectures, virtual SAN, software-defined storage and hyper-convergence, as well as traditional storage and data protection.


Snapshot 2

Backup buyers lean on vendors to bring home bargains

How much of a discount did you get on your backup hardware purchase?

[Pie chart: responses spanned no discount, 1%-10%, 11%-24%, 25%-50%, 50%+ and don't know]

What was the most significant concession your vendor made?

[Chart: concessions included an extra discount, a free extra year of maintenance, an enhanced service level for free, additions to the product for free, free professional services, creative payment terms, an honored expired discount promotion and none of the above]

The average price paid for backup hardware purchase: $246,878

SOURCE: TECHTARGET RESEARCH

PROTECT

Built-in data protection for primary storage

Snapshots, mirroring, replication and erasure coding can all help keep data safe; learn how and when to use them.

BY GEORGE CRUMP

IT PLANNERS TEND to focus on performance and scale when selecting a primary storage system, but another important consideration is data protection. And data protection is no longer the sole domain of the backup process.
Modern primary storage systems can go a long way toward protecting themselves with built-in data protection features that potentially replace backup or at least lighten its load. For IT professionals, the key is to understand the capabilities of these features and decide just how much of the backup load they would like primary storage to carry.

PROTECTION FROM MEDIA FAILURES

Media failure protection has been available on storage arrays since their inception. The goal of media protection is to keep data available if an individual storage device fails. But all media protection comes at a cost. The first cost is capacity overhead: how much additional storage capacity is required to maintain protection. At first, all media protection required mirroring, which has a 100% capacity overhead. RAID protection (RAID 3, RAID 4 and RAID 5) soon followed, which required less capacity for protection but exacted another cost: processing power. RAID protection calculates parity to provide data redundancy, and that calculation consumes compute resources. The most sophisticated data protection systems consume


the most compute resources.
The final cost of protecting against media failure is measured by the time it takes to return to a protected state. If there is a media failure, a mirrored arrangement can return to a protected state quickly. A RAID setup, because it has to recalculate all that parity, can take longer. As drive capacities increase, so does the time it takes to recover from a drive failure. An array configured with 6 TB drives and a basic RAID implementation can take days to return a system to a fully protected state.
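To make the parity cost concrete, here is a minimal sketch of single-parity protection in the spirit of RAID 4/5: one drive's worth of capacity holds the XOR of the others, so any one lost drive can be recomputed from the survivors. The stripe layout and helper names are illustrative only, not any array's implementation.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks; this is the parity calculation."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# One stripe: a block per data drive, plus one parity block.
data = [b"AAAA", b"BBBB", b"CCCC"]     # drives 0-2
parity = xor_blocks(data)              # drive 3

# Drive 1 fails: rebuild its block from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]              # data recovered intact
```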

A RAID Reader
For more information on the RAID alternatives available in most storage systems:
READ: Uncover the right RAID recovery service
LISTEN: Examining RAID levels: RAID 0 through RAID 6
DOWNLOAD: RAID level comparison chart: A free download
READ: RAID types/levels and benefits explained


FAILURE PROTECTION FALLS SHORT

It's clear that media failure protection has to improve. Recently, some vendors have introduced advanced RAID controllers that don't have to read the entire drive to recover data when doing a drive rebuild; they only need to rebuild the data that was actually on that drive. Considering that most drives run at about 35% of their capacity, intelligent RAID should reduce recovery times by 60% or more, since roughly two-thirds of each drive never needs to be read or rewritten.
An alternative to advanced RAID recovery is erasure coding, which uses parity-based data protection similar to RAID. Erasure coding is typically used in scale-out storage environments built from a cluster of storage nodes. It provides better granularity than RAID and writes both data and parity across the storage nodes. The advantage from a failure perspective is that all the nodes in the storage cluster can participate in the replacement of a failed node. As a result, the rebuilding process does not become as CPU-constrained as it might in a traditional storage array with RAID.
For scale-out storage, an alternative to erasure coding is replication. Essentially, data from one node is mirrored to another node or to multiple nodes. Replication is simpler to calculate than erasure codes, but it consumes at least twice the capacity of the protected data.
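The capacity tradeoff described here is easy to quantify: a k+m erasure code (k data fragments, m parity fragments) costs m/k extra capacity, while keeping n full copies costs (n-1) times the data. A quick sketch, with the 10+4 layout picked purely as an example:

```python
def erasure_overhead(k, m):
    """Extra capacity consumed by a k+m erasure code, as a fraction of the data."""
    return m / k

def replication_overhead(copies):
    """Extra capacity consumed by keeping n full copies of the data."""
    return copies - 1

print(f"10+4 erasure code:  {erasure_overhead(10, 4):.0%} overhead")   # 40%
print(f"3-way replication: {replication_overhead(3):.0%} overhead")    # 200%
```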

PROTECTION FROM DATA CORRUPTION

RAID protects against media failure but will not help an organization recover from data corruption caused by a user or software error. With modern primary storage systems, however, administrators don't have to rely on recovering corrupted data from a recent backup if one of those errors occurs. Instead, they can leverage snapshots. On an array, the physical location of data is mapped to a table. When

there's a request for data, the table determines the correct location of the data and routes the request appropriately. A snapshot, instead of making a copy of the data, makes a copy of the table. Data associated with the copied version of that table is then frozen, or set to read-only, for as long as the copied version of the table exists.
A snapshot creates a point-in-time copy of the data without actually copying data. Data growth only occurs when updates are made to data under snapshot or when new data is added. When this occurs, the original table is also updated. The application uses the original table to gain access to the live data set. Another process, like backup or replication, uses the snapshot table to access the point-in-time copy of the data.
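A minimal sketch of that table-copy idea, in a redirect-on-write style: the snapshot is a frozen copy of the block map, and an update writes the new block elsewhere and repoints only the live table. The class and field names are illustrative, not any array's internals.

```python
import copy

class Volume:
    def __init__(self):
        self.store = {}     # physical block address -> data
        self.table = {}     # logical block -> physical address (the live map)
        self.next_pba = 0

    def write(self, lba, data):
        # Redirect-on-write: never overwrite in place, just remap the live table.
        self.store[self.next_pba] = data
        self.table[lba] = self.next_pba
        self.next_pba += 1

    def snapshot(self):
        # A snapshot copies only the map, not the data blocks.
        return copy.deepcopy(self.table)

    def read(self, lba, table=None):
        return self.store[(table or self.table)[lba]]

vol = Volume()
vol.write(0, b"v1")
snap = vol.snapshot()          # point-in-time view: block 0 -> "v1"
vol.write(0, b"v2")            # live volume moves on
assert vol.read(0) == b"v2" and vol.read(0, snap) == b"v1"
```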


Snapshot technology has been available on storage systems for more than a decade, but over the last few years it has become among the most valuable data protection systems. First, most storage systems can track hundreds of snapshots without any major impact on performance. Second, most snapshot features within storage systems have the ability to interface directly with applications such as Oracle and Microsoft SQL Server to capture a clean copy of data while the snapshot is occurring. These two advances mean that snapshots can be taken frequently and stored for fairly long periods of time with the assurance that the data they store is usable for recovery.
Snapshots are valuable when there is data corruption or an accidental deletion. In those cases, the snapshot can be mounted and data copied back to the production volume, or the snapshot can simply replace the existing volume. In both scenarios, data loss is minimal and time to recover is almost instant.

Erasure Coding Revealed
Stay on top of developments related to erasure coding technology:
READ: As users search for RAID alternatives, erasure coding returns
LISTEN: Data importance, resilience, durability determine when to use erasure coding
WATCH: Toigo: Erasure coding as prevention for bit rot
READ: RAID systems yield to erasure coding for data protection

PROTECTION FROM STORAGE SYSTEM FAILURE

The type of failure that used to force a recovery from the backup software's data was a failure of the storage system itself, caused by multiple drive failures, a bug in the storage software or firmware, or some other crippling event. Now, data centers can leverage replication technology that builds on top of snapshots to deliver protection from a storage system failure.
Snapshot replication leverages the snapshot's granular understanding of data, and only copies changed blocks of data from the primary storage system to a secondary

storage system. Typically, snapshot replication is used to create an off-site disaster copy, but the reality is that most disasters are not data center-wide; they often involve just one critical server that has failed. Snapshot replication can be used to replicate data to a secondary storage system that may also be on-site. This secondary storage system can be used as a recovery point if the primary storage system fails. Of course, snapshot replication can, and should, still be used to update a third storage system off-site.
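Building on the snapshot sketch above, changed-block replication amounts to diffing two block maps and shipping only the blocks whose mapping moved; the helpers below are illustrative rather than any vendor's protocol.

```python
def changed_blocks(old_table, new_table):
    """Logical blocks whose physical mapping changed between two snapshots."""
    return {lba for lba, pba in new_table.items()
            if old_table.get(lba) != pba}

def replicate(volume, old_snap, new_snap, target):
    # Ship only the blocks that changed since the last replicated snapshot.
    for lba in changed_blocks(old_snap, new_snap):
        target[lba] = volume.read(lba, new_snap)
```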

A common concern about this approach is cost. At
one time, the target replication system had to match the
originating system, but now most storage system vendors
allow replicating snapshots to a lower-cost storage system
within their product portfolio. In other words, a data
center can replicate from a tier 1 storage system to a tier
2 storage system. Alternatively, a third-party replication
application or a software-defined storage product can be
used to replicate from any storage system to any other
storage system.
Another alternative is to leverage inline deduplication
to create a full copy of the data with zero data capacity
growth. With products that provide that capability, a copy
of the data is made and is deduplicated as it is copied.

In other words, no data is written since the data already


exists; only deduplication metadata is updated. These deduplicated copies can be more useful than snapshots but
are still exposed to metadata vulnerabilities.
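A sketch of why such a copy consumes no new capacity, assuming a simple content-addressed store: "copying" becomes bumping reference counts, so only metadata grows. The hashing and reference-counting scheme here is illustrative.

```python
import hashlib
from collections import Counter

chunks = {}        # content hash -> unique chunk (the only real capacity)
refs = Counter()   # content hash -> reference count (metadata)

def ingest(volume_map, lba, data):
    """Write a block through inline deduplication."""
    key = hashlib.sha256(data).hexdigest()
    chunks.setdefault(key, data)   # stored once, no matter how many writers
    refs[key] += 1
    volume_map[lba] = key

def full_copy(src_map):
    """A 'full copy' of a volume: every chunk already exists, so only
    reference counts and a new block map are created."""
    dst_map = dict(src_map)
    for key in src_map.values():
        refs[key] += 1
    return dst_map

vol = {}
ingest(vol, 0, b"8K block of data")
backup = full_copy(vol)            # zero data growth: len(chunks) is still 1
assert len(chunks) == 1 and refs[vol[0]] == 2
```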
The value of having a secondary storage system on-site cannot be overstated. It protects the data center from one of the worst disasters possible: a primary storage system failure. The secondary system can be used to feed other processes like analytics, reporting and, of course, the backup process itself.

DATA CENTER FAILURE

Minor disasters like application data corruption or storage system failure are far more common, but a full-site disaster tends to capture the most attention. While most IT professionals will never experience a site disaster firsthand, the consequences of a site loss are so severe that a data center must have a disaster recovery plan. Again, as with the other failure scenarios, there are multiple DR options.
The first is leveraging the snapshot replication described above to replicate data to a secondary site. The problem is that the costs associated with equipping, powering and staffing a secondary site can be overwhelming, especially when you consider the chances of needing the secondary site are relatively slim.
For larger organizations that already have a secondary data center, the costs may be manageable. Many businesses, however, don't have secondary sites, so they should consider the measured use of cloud services for disaster recovery. Measured use means that the cloud service

should be used to store only the most recent copies of data. The recent data copies are the ones most likely to be needed in the event of a major disaster. There are replication and cloud backup products and services that provide the ability to keep only the latest versions of data copied in the cloud.
The cloud can then be used to instantiate application images in the provider's cloud. The result is a rapid (not instantaneous) recovery in the event of a data center loss.

READY FOR FAILURE

Primary storage can better protect itself from data failures than ever before, but those internal data protection systems don't add up to a total answer for adequate data protection. While most storage systems can provide more frequent data copies and more rapid data recoveries, at some point storage managers should make sure that a copy of data that is entirely separate from the primary storage system is made and safely tucked away. In addition, IT planners need to consider that, increasingly, disasters are caused by outside intrusions such as ransomware, so it's important that data centers have a disconnected, offline copy of data in addition to their secondary online copies. n

GEORGE CRUMP is president of Storage Switzerland, an IT analyst firm focused on storage and virtualization.


HOT SPOTS
JASON BUFFINGTON

Disk, tape, or cloud? Or, all of the above?

Mix and match data protection media to better meet your company's needs.

THERE'S A DEBATE going on about data protection methods right now, and it centers on disk, tape or cloud. Of course, cloud is a delivery mechanism, not a medium, and cloud providers are using disk and near-line tape, too.
But what kinds of media and architectures are organizations really using for backup? According to ESG research, the most common approach continues to be disk-to-disk-to-tape (D2D2T). In other words, backing up from primary disk; then leveraging secondary disk for efficient deduplication and fast, granular recovery; and finally employing tertiary tape for long-term retention. Twenty-six percent of the organizations surveyed by ESG said they use this data protection method.
Another 16% go a different route: disk-to-disk-to-cloud (D2D2C). They make sure they'll have rapid/granular recovery if needed via the secondary disk, and they send

data to the cloud for tertiary retention as well.
The tradeoff between the two data protection methods is typically related to regulatory compliance: the amount of data an organization must retain for long periods will likely influence its cloud vs. tape long-term retention decisions.
Still another 17% opt for disk-to-tape-to-tape: tape for on-site recovery, with some tapes going off-site for retention and BC/DR preparedness. Interestingly, it appears the second most common method of data protection in use today (at least among the respondents surveyed) is an all-tape system.
However, the remainder of the surveyed organizations went with all-disk data protection methods:

• Disk to disk for fast on-premises protection but no tertiary or off-premises retention (14%)
• Disk to disk across a WAN to another disk (11%)
• Disk across a WAN to another disk, but making centralized backups of branch/remote offices back to the main data center (6%)
• Disk to tape, i.e., traditional tape backups (6%)
• Disk to cloud (4%)

A case could be made for any of these data protection methods. And that's the point. An organization's recovery goals, retention requirements and economic priorities all

should factor into the decision.
I love cloud backups of remote office data, though, depending on the organization's recovery needs, I can see sending the data back to headquarters instead. I also love cloud-based data protection for endpoint devices, as long as IT oversees the process. And I think the cloud is compelling to support BC/DR, where the organization not only uses the cloud for storage, but also leverages cloud-based compute to fail over if needed.
And I'm a fan of tape, especially for organizations in retention-regulated industries and those storing data for more than two years.
Still, disk ought to be the primary means of protection and recovery. That's because it is difficult (if not impossible) to meet today's SLAs without having a fast on-site copy to recover from.
ESG research tells a similar story:
• Seventy-three percent of organizations use disk as their first tier of recovery. That number should rise somewhat in the future.
• However, 69% of organizations use something other than disk in their overall strategy. Nearly half (49%) use tape, and 20% are using the cloud in some fashion for data protection.
• Twenty-three percent of the surveyed organizations exclusively use tape, and 4% exclusively use cloud. The number of those using tape exclusively may recede somewhat, and the number of those using cloud exclusively may rise, but not by much.

I expect some organizations that use tape exclusively will evolve to a disk-plus-tape model for efficiency and to help them meet tight SLAs. But those same goals will likely keep pure-cloud platforms from taking over. Disk, tape and cloud data protection methods all have their place in supporting rapid recovery, reliable retention and data agility/survivability (respectively).
If those are your goals, then perhaps all three approaches/media types should be part of your data protection strategy. n

JASON BUFFINGTON is a senior analyst at Enterprise Strategy Group. He focuses primarily on data protection, as well as Windows Server infrastructure, management and virtualization. He blogs at CentralizedBackup.com and tweets as @Jbuff.


READ/WRITE
JEFF KATO

New approaches to building scale-out NAS

New design techniques are ushering in new scale-out NAS systems that rival object storage.

LIFE WAS MUCH easier in the '90s. We had block storage and file storage, and each had its place: block for highly transactional data and file for unstructured and departmental storage.
By the end of the '90s, network-attached storage (NAS) design improved performance enough to make it suitable for running Oracle databases. Administrators preferred easy-to-manage file storage to complex block storage with dedicated Fibre Channel SAN switches.
But with the turn of the century, new technologies for storage devices and architectures multiplied. Unified storage emerged, combining block and file storage. First-generation multi-node scale-out NAS also emerged, improving scalability but compromising small-file performance. When scale-out NAS design couldn't keep

pace with Web-scale requirements, object storage was developed, adding global scale-out but relinquishing easy file access.
Instead of redesigning legacy scale-out NAS products, most of the industry continues to focus on object storage. Unfortunately, object-based storage doesn't provide the high-performance, enterprise-grade, POSIX-compliant file access that thousands of legacy applications require. It also fails to provide a performance level that can meet the requirements of many big data workloads like media and entertainment, life sciences and commercial HPC. Object storage companies are trying to address these issues by adding file gateway accelerators in front of object-based back ends. But that approach adds another layer of complexity, which leaves the door open for a modernized, enterprise-capable, scale-out NAS design that can meet enterprise performance requirements and scale as well as object storage.

IDEAL SCALE-OUT NAS DESIGN PRINCIPLE

If we had a clean design sheet for a modern, modular, scalable NAS design, it would likely include the following attributes:

• Flash-first design. No other technology in the last 10 years has done more to challenge the paradigm of

traditional storage designs than solid-state storage. When flash-based drives first appeared, they were an expensive luxury and used sparingly. Now flash is ubiquitous. Flash enables many new storage attributes, including enhanced metadata, removal of battery-backed cache, deduplication performance and data tiering that actually makes sense.

• Data-aware enhanced metadata. Metadata is no longer limited by the need to save precious persistent memory space or by the IOPS constraints of spinning media. New storage devices should expand metadata approaches and give storage an extra boost of brains. For scale-out NAS, this enables the device to become data-aware and provides a rich set of real-time analytics at scale. Time- and performance-consuming file system tree walks, metadata scans and file system lookups are eliminated as metadata aggregates are updated and stored in real time (see the sketch at the end of this list).

• Massive scalability without compromising performance. Object storage vendors have said that you cannot use NAS for big data because it doesn't scale. Until now, they were right. Traditional scale-out NAS becomes bogged down at hundreds of millions of files, which leans it toward workloads of very large files and leaves high-performance NAS to NetApp and EMC. With flash-first design and enhanced metadata techniques, modern scale-out NAS should be able to scale to tens of billions of files (a >100X improvement) with uncompromised performance for both large and small files.

• Software-defined design. Since flash-first design eliminates special hardware requirements, scale-out NAS software should be portable and able to run on industry-standard servers. This allows the storage to be deployed on the latest hyper-scale architectures or even run on a virtual machine in a public cloud. This approach ensures that scale-out NAS can be deployed using the same hardware and economics as object storage, making the product extremely cost-effective.

• SaaS software delivery model. While modernizing NAS design, we should go to the extreme and deliver software as you would modern cloud applications. Software as a service builds an invisible and perpetual upgrade process into the product. It eliminates the tedious qualification effort driven by old architectures, where quality had to be baked in with months of testing. While enterprise customers still require strict control and acceptance of new software releases, it's time to adopt a more agile development model for storage products.

• Open APIs. Open APIs are required to unlock the value of the advanced data services, analytics and metadata capabilities. Making features easily programmable and controllable using a modern RESTful approach will allow this


next generation of scale-out NAS to be easily integrated into both legacy and future cloud-centric environments.
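To ground the data-aware metadata attribute flagged earlier in this list, here is a toy sketch of real-time metadata aggregates: each directory keeps running totals that are updated at write time, so subtree questions are answered from the aggregates instead of a tree walk. The structure is a simplification for illustration, not any particular file system's design.

```python
class DirNode:
    """Directory that maintains real-time aggregates for its subtree."""
    def __init__(self, parent=None):
        self.parent = parent
        self.children = {}
        self.agg_bytes = 0    # total bytes in subtree
        self.agg_files = 0    # total files in subtree

    def add_file(self, size):
        # Update aggregates up the ancestor chain at write time (O(depth)),
        # so later capacity queries never scan the tree.
        node = self
        while node is not None:
            node.agg_bytes += size
            node.agg_files += 1
            node = node.parent

root = DirNode()
projects = root.children["projects"] = DirNode(parent=root)
projects.add_file(4096)
projects.add_file(8192)
print(root.agg_bytes, root.agg_files)   # 12288 2, with no tree walk
```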

EMERGING VENDORS WITH MODERN SCALE-OUT NAS DESIGN

While most new scale-out storage companies are building products based on grid-based object-storage design principles, a few are tackling the arguably harder problem of enterprise performance-grade scale-out NAS design.
Qumulo was founded by many of the original inventors of Isilon OneFS. The focus of Qumulo technology was to bring massive scalability at uncompromised performance. In addition to breakthrough scalability, Qumulo focuses on making data visible through real-time capacity and performance analytics that make management of petabytes of storage a breeze. Qumulo is focused on HPC and large-scale unstructured data workloads in media and entertainment, life sciences, higher education, oil and gas, and others.
InterModal Data was founded by former executives of NetApp, Sun and other legacy storage leaders. The company delivers scalable, flexible and efficient scale-out storage for enterprises through distributed system software on top of a disaggregated hardware architecture. Their

approach physically separates and logically connects I/O nodes from capacity nodes over Ethernet, using RAM and flash caching on every node.
Scality initially started out focusing on object storage technology, but has now expanded to address the requirement for scale-out NAS that coexists seamlessly with object storage. The company delivers scalable, flexible and efficient scale-out storage for enterprises via a software-only approach tunable for both high-performance workloads and cost-effective archive storage.

NEW LIFE FOR NAS

It's been well over a decade since major changes have appeared in the scale-out NAS product category, so it's refreshing to see these companies take a fundamentally new approach. If they can radically improve scale and performance while adding a rich set of analytical capabilities, many customers will be drawn to these products, just as easy file access combined with enterprise performance attracted users in the '90s. n
JEFF KATO is a senior storage analyst at Taneja Group with a focus
on converged and hyper-converged infrastructure and primary
storage.


TechTarget Storage Media Group


STORAGE MAGAZINE
VP EDITORIAL Rich Castagna
SENIOR MANAGING EDITOR Ed Hannan
CONTRIBUTING EDITORS James Damoulakis, Steve Duplessie,
Jacob Gsoedl
DIRECTOR OF ONLINE DESIGN Linda Koury
SEARCHSTORAGE.COM
SEARCHCLOUDSTORAGE.COM
SEARCHVIRTUALSTORAGE.COM
SENIOR NEWS DIRECTOR Dave Raffo
SENIOR NEWS WRITER Sonia R. Lelii
SENIOR WRITER Carol Sliwa
STAFF WRITER Garry Kranz
SITE EDITOR Sarah Wilson
ASSISTANT SITE EDITOR Erin Sullivan
SEARCHDATABACKUP.COM
SEARCHDISASTERRECOVERY.COM
SEARCHSMBSTORAGE.COM
SEARCHSOLIDSTATESTORAGE.COM
SENIOR MANAGING EDITOR Ed Hannan
STAFF WRITER Garry Kranz
SITE EDITOR Paul Crocetti


STORAGE DECISIONS TECHTARGET CONFERENCES


EDITORIAL EXPERT COMMUNITY COORDINATOR Kaitlin Herbert

SUBSCRIPTIONS
www.SearchStorage.com

STORAGE MAGAZINE
275 Grove Street, Newton, MA 02466
editor@storagemagazine.com

TECHTARGET INC.
275 Grove Street, Newton, MA 02466
www.techtarget.com

© 2016 TechTarget Inc. No part of this publication may be transmitted or reproduced in any form or by any means without written permission from the publisher. TechTarget reprints are available through The YGS Group.
About TechTarget: TechTarget publishes media for information technology
professionals. More than 100 focused websites enable quick access to a deep store
of news, advice and analysis about the technologies, products and processes crucial
to your job. Our live and virtual events give you direct access to independent expert
commentary and advice. At IT Knowledge Exchange, our social community, you can
get advice and share solutions with peers and experts.

COVER IMAGE AND PAGE 8: XXXXX

Stay connected! Follow @SearchStorageTT today.
