
Modern Infrastructure
Creating tomorrow’s data centers
JANUARY 2014, VOL. 3, NO. 1

Automate or Else
It’s not easy, but nothing worthwhile ever is.

EDITOR’S LETTER
2014: The Year of Tape Storage (Really!)

SURVEY SAYS
Mobility

FIRST LOOK
GPU Virtualization
New Devices for 2014

SERVER VIRTUALIZATION
When Two Hypervisors Are Better Than One

EXPLAINED
Leaf-Spine Architecture

DATA CENTER PERFORMANCE
How to Get Better Uptime

OVERHEARD
AWS re:Invent 2013

COMMENTARY • Less is More • The End of Cloud Nirvana • Better Late Than Never

EDITOR’S LETTER


The Year of Tape Storage (Really!)

I’M NOT A big fan of prediction stories. They’re usually boring and almost always wrong. With that said, I’m going to make a prediction. This year, we’ll see a resurgence in the use of enterprise-class tape library systems. Let me explain.

Amazon Web Services has competition for most of its services, with one notable exception: Glacier, the ultra-low-cost object storage service priced at a penny per gigabyte per month. “Organizations that claimed they would never use the cloud for security reasons tell us, ‘At a penny per gig per month, I don’t have a compliance problem anymore,’” said Rick Faris, director of product management at Riverbed, whose customers use Glacier combined with its Whitewater cloud storage appliance.

But Glacier drawbacks such as five-hour retrieval times and penalties for retrieving large amounts of data leave room for competition on price and lack of restrictions.

Thus far, that’s been hard for other cloud providers. Amazon S3, Microsoft Azure and Google Compute Engine charge around 8 cents per gigabyte per month for their disk-based object storage services. The now-defunct Nirvanix cloud storage service charged as much as 18 cents per gigabyte per month.

Using tape as the basis of a cloud object store could change all that. This fall, tape library manufacturer Spectra Logic introduced Black Pearl, a front end that provides a RESTful object storage interface to the tape library. The appliance also contains a solid-state cache for near-real-time access to recently stored data, and it promises retrieval times of no more than two minutes.

Yahoo is already beta-testing the product, exploring ways to move away from traditional file-system backups while tapping tape’s low cost and reliability, said Kevin Graham, Yahoo principal storage architect. “The physical properties of tape are fantastic—it has near-zero Opex, it has a good bit error rate and it’s cheap,” Graham said. Yahoo’s initial plan is to use Black Pearl internally, but Graham doesn’t rule out opening up its tape infrastructure to outsiders down the road. “We’re going to start small,” he said, “but my hope is that we can open it up and turn it into a data center platform service.”

For the sake of my prediction, I hope it succeeds!
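For a rough sense of the gap those per-gigabyte prices imply, here is a back-of-the-envelope sketch in Python. It uses only the prices cited above; the 500 TB capacity is an arbitrary example for illustration, not a figure from any of these providers.

```python
# Back-of-the-envelope monthly cost comparison for cloud object storage,
# using the per-gigabyte prices cited in this letter. The 500 TB capacity
# is an arbitrary example, not a vendor figure.
PRICES_PER_GB_MONTH = {
    "Amazon Glacier": 0.01,              # a penny per GB per month
    "Disk-based object storage": 0.08,   # S3/Azure/GCE ballpark cited above
    "Nirvanix (now defunct)": 0.18,
}

def monthly_cost(terabytes: float, price_per_gb: float) -> float:
    """Monthly bill in dollars, treating 1 TB as 1,000 GB for simplicity."""
    return terabytes * 1000 * price_per_gb

for service, price in PRICES_PER_GB_MONTH.items():
    print(f"{service}: ${monthly_cost(500, price):,.0f} per month for 500 TB")
```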

Alex Barrett, Editor in Chief

IT OPERATIONS


The New Automation Imperative

The instinct to automate IT is anything but automatic. But it’s becoming a necessity.


BY ALEX BARRETT


IT’S HARD TO argue with the wisdom of automation. With it, IT professionals can eliminate time-consuming and error-prone processes, improve uptime and customer satisfaction, impose standards and even save money. What’s not to love?

Well, for starters, there’s cost. Then there’s complexity. Then there’s the battle between legacy and modern automation tools. Then there’s the learning curve. Then there’s the acceptance curve. Last but not least, there’s the small concern of automating oneself out of a job.

“To a certain extent, everything we do in IT is a form of automation,” said Glenn O’Donnell, principal analyst at Forrester Research. “And yet, we resist automation because we want to remain in control.”

Or maybe IT resists automation because it’s hard. “Automation is always a great idea, until someone realizes it’s super complicated,” said Robert Green, principal cloud strategist at Enfinitum Consulting, which uses automation to create cloud environments.

But whether it’s fueled by sluggish budgets, the rise of cloud computing, or the Agile and DevOps movements, interest in IT automation is at an all-time high, experts say.


“IT is being asked to do more things faster and in shorter windows,” said Ronni Colville, a Gartner vice president and distinguished analyst for IT operations management. “Automation is what everybody’s seeking right now, and we’re seeing a big increase in the number of projects.”

THE AUTOMATION MATURITY CURVE

What organizations are automating today depends on what they have already automated, and how much experience they have with the tools and technologies. “Automation is more of an evolution of trust between vendors and implementers than a revolution of technology,” said Forrester’s O’Donnell.

In the long list of frequently automated IT processes, for example, patch automation is seen as low-hanging fruit. “No one in their right mind does that manually anymore,” he said. Beyond that, which processes get automated differs depending on the company, its resources and personnel.

One large automotive insurer is slowly but surely automating key IT processes using Microsoft System Center Orchestrator. The IT staffer who supervises the monitoring project is starting with alerts generated by the firm’s BlueStripe performance monitoring tool. The goal is to take network operators out of the loop as much as possible.

“In the past, an alert would happen, and would contact the application owner at 3 a.m., who would then spend 30 minutes restarting a Web service,” the monitoring supervisor said. Today, that process has been automated such that within 120 seconds, the problem is detected, an action is performed and the problem is resolved—without ever having to wake the application’s owner.

But while automating faults and events has been a success, the monitoring supervisor worries about alienating his peers. “The [automation] tools are in good shape, but it’s harder to get the IT teams on board.” Not everyone has welcomed the sudden surge in emails they receive thanks to the newly automated system. So he’s taking it slow, trying not to annoy colleagues. “I don’t want everything I do turned into [a coworker’s] email rule that says ‘Delete everything this person sends me,’” he said.

IN THE LONG LIST OF FREQUENTLY AUTOMATED IT PROCESSES, PATCH AUTOMATION IS SEEN AS LOW-HANGING FRUIT.

Others, in contrast, see extending their automation know-how as a key way to curry favor with other IT teams. The engineering architect for a global financial services company said that the focus of automation in his shop has moved beyond simple infrastructure provisioning and patching toward higher value processes such as closed-loop remediation and building out private cloud environments for the test/dev and quality-assurance teams. “It’s really about helping to speed up the development teams so that they can be more agile,” the engineering architect said.


To that end, the firm uses a bevy of tools from Hewlett-Packard, including the old Opsware technologies now known as HP Server Automation and HP Operations Orchestration.

Indeed, provisioning cloud environments—public or private—is an increasingly popular form of automation, said Gartner’s Colville. That’s the thrust of many contemporary automation and orchestration platforms on the market, including offerings from traditional vendors such as VMware with vCloud Automation Center and BMC with Cloud Lifecycle Management, or those from newer players such as RightScale, Dell Enstratius and ServiceMesh, recently acquired by CSC. “Event management is where it all got started, but these days it’s all about cloud and self-service provisioning,” Colville said.

SCALE UP, SCALE DOWN

There’s also a significant opportunity to use automation not only to build out repeatable cloud environments but to take advantage of cloud’s scalability and elasticity. Enfinitum’s Green described a recent project in which he moved a claims processing application for a large U.S. insurance company from a managed services provider to Amazon Web Services’ public cloud, saving the company $750,000 per year.

The savings came in large part from being able to shrink the environment substantially, from 35 servers at the MSP to a base of just 11 servers on AWS, Green explained. That environment is now auto-scaled up in response to client load using monitoring and orchestration software from ScaleXtreme.

In one test, Green threw 1,000 claims at the system to see how it would respond. “The system started at 11 servers, and we watched it move to 20, then 25, and all the way up to 60,” he said. “As claims popped off the stack, the number of servers went all the way back down.”

PROVISIONING CLOUD ENVIRONMENTS—PUBLIC OR PRIVATE—IS AN INCREASINGLY POPULAR FORM OF AUTOMATION.

Armed with simple-to-use automation tools, a lot more organizations would right-size applications according to demand, said Nand Mulchandani, ScaleXtreme co-founder and CEO. For example, now that test and dev environments are increasingly in the public cloud, users should find ways to automate the tear-down of those environments, he said. “Before, people didn’t turn stuff off, but now that you pay by the hour, you have every incentive to turn it off,” Mulchandani said.
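The scale-out behavior Green describes amounts to a simple control loop: watch the work queue, pick a server count, and let it fall back to the base as the backlog drains. Here is a minimal, hypothetical Python sketch of that idea; the per-server capacity and fleet limits are assumptions for illustration and do not represent ScaleXtreme’s actual software or API.

```python
import math

# Hypothetical autoscaling policy in the spirit of the claims-processing example:
# scale the server count with the depth of the work queue, within fleet limits.
MIN_SERVERS = 11        # the steady-state base described above
MAX_SERVERS = 60        # the peak observed during the 1,000-claim test
CLAIMS_PER_SERVER = 20  # assumed per-server capacity (illustrative only)

def desired_capacity(queued_claims: int) -> int:
    """Pick a server count for the current backlog, clamped to the fleet limits."""
    needed = math.ceil(queued_claims / CLAIMS_PER_SERVER)
    return max(MIN_SERVERS, min(MAX_SERVERS, needed))

# As claims pop off the stack, capacity falls back toward the base of 11 servers.
for backlog in (1000, 600, 250, 40, 0):
    print(f"{backlog:>4} queued claims -> {desired_capacity(backlog)} servers")
```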

A CROWDED IT AUTOMATION TOOLBOX

If you find yourself nodding your head at this vision, the next question is: How do you get there? What tools do you need?


There are two schools of thought when it comes to IT automation tools: adopt a big automation framework and augment it as needed with point tools, or rely heavily on low-cost/free open source automation tools to cobble together your IT systems. Forrester’s O’Donnell recommends that organizations align with one of the large automation “anchor vendors.” At the same time, it’s not possible to standardize completely, in which case you can augment with niche tools for needs the larger player doesn’t meet. “Think of it as a shopping mall with a big anchor store and lots of smaller boutiques,” O’Donnell said.

Pick Your Automation Poison

AMONG AUTOMATION BUFFS, there’s a mini-debate raging on which is better: procedural- or model-driven automation. At a high level, procedural languages like Chef describe how to get to a desired state; model-driven languages like Puppet describe what a desired state should look like. In the words of Puppet Labs’ Scott Johnston, “Chef dictates the how; Puppet dictates the what.”

The lion’s share of legacy scripts are procedural, and thus probably the most familiar. Advocates say they provide the most flexibility, too. Others argue that a more model-driven approach is easier to maintain. “[Procedural] if-then-else can get very brittle, because there are a lot of downstream changes that you need to make,” said Derek Townsend, ServiceMesh vice president of product marketing.
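To make the how-versus-what distinction concrete, here is a toy sketch in plain Python (deliberately not real Chef or Puppet syntax). The FakeHost class and its resources are invented for illustration; the point is only that the procedural version spells out every step, while the model-driven version declares an end state and a convergence loop acts only on drift.

```python
# Toy illustration of procedural versus model-driven automation.
# Plain Python only; FakeHost and its resources are invented for this example.

class FakeHost:
    """Stand-in for a managed node; records actions instead of running them."""
    def __init__(self, installed=()):
        self.installed = set(installed)
        self.actions = []

    def install(self, pkg):
        self.installed.add(pkg)
        self.actions.append(f"install {pkg}")

# Procedural style (the "how"): spell out each step toward the end state.
def procedural_setup(host):
    if "ntp" not in host.installed:
        host.install("ntp")
    host.actions.append("write /etc/ntp.conf")
    host.actions.append("restart ntp")       # restarts every run, needed or not

# Model-driven style (the "what"): declare the end state; converge only on drift.
MODEL = {"packages": {"ntp"}, "files": {"/etc/ntp.conf"}}

def converge(host, model):
    for pkg in model["packages"] - host.installed:
        host.install(pkg)
    for path in model["files"]:
        host.actions.append(f"ensure {path} matches model")

already_configured = FakeHost(installed={"ntp"})
converge(already_configured, MODEL)
print(already_configured.actions)  # only the drift check on the file is touched
```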

Costs, however, may put those products out of reach for a lot of shops, hence the popularity of open source tools like Chef, Puppet, Salt and Ansible, said Michael Coté, research director for infrastructure software at 451 Research. “With open source, you don’t need to spend any money—if you know what you’re doing. That’s a lot better than the seven figures you’ll spend with BMC,” he said.

But low-level tools only go so far, and O’Donnell said that users should expect to see those small companies evolve their products to focus more heavily on process automation—“where the big guys have dominated.” Meanwhile, large vendors are increasingly opening their environments to support open source tooling. For instance, VMware vCAC handles process orchestration, but turns to Puppet Labs’ Puppet to do actual provisioning work, Coté pointed out.

Automation users say to focus on what you’ve got. “Our guiding principle is Just Enough Technology, or JET,” said the engineering architect at the large financial services organization. “If an open source tool does enough, then we leverage those things. If we already own comparable software, then we use that.”

If this approach sounds like it will leave you with a lot of tools, you’re right—but you’re not alone. Gartner’s Colville said she routinely talks with customers considering nine or more automation tools. “Our prediction is that enterprises will have no less than four separate automation tools in their shop through 2017,” she said.


THE INEVITABILITY OF AUTOMATION

Just because automation is messy and complex is not a reason to avoid it. For one thing, you don’t need to automate everything you do, right away. It’s easy—and advisable—to start small.

“We tell customers to automate just one thing. Don’t try and boil the ocean,” said Scott Johnston, director of strategy and marketing at Puppet Labs, the force behind the Puppet configuration management language. “Start with something that is causing you pain, like root password administration or LDAP configuration, and once you get value from that, go up the stack.”

But it would be a mistake to stop there, said Forrester’s O’Donnell. “We’re in the midst of an IT industrial revolution, and a lot of cherished roles in IT are going away,” he said. In the future, building repeatable, scalable systems through automation will be at the core of what every system administrator does, if it isn’t already, and there will be a much greater emphasis on software-development skills.

With that on the horizon, IT professionals should heed that warning and wherever possible automate the tasks they do today, to become more like application developers themselves.

“AUTOMATE JUST ONE THING. DON’T TRY AND BOIL THE OCEAN.”

Scott Johnston, director of strategy and marketing at Puppet Labs

“Who better than you to automate the job that you already do?” O’Donnell asked. In the end, “it’s better to become the automator than to be automated.”

ALEX BARRETT is editor in chief of Modern Infrastructure.


Survey Says

The latest from IT pros on mobility in 2014

What is the primary reason you will not provide mobile devices to your employees in 2014?

35% — Only specific departments will be given mobile devices
08% — We’ll provide them in 2015
04% — Other
05% — We don’t have the budget
08% — They don’t need them
40% — They already use them

N=632; SOURCE: 2014 IT PRIORITIES SURVEY

What is your main goal with tablets?

22% — Reduce cost of computing for personnel with limited computer needs
28% — Implement a dedicated device for mobile workers
50% — Accommodate end-user demand to integrate into corporate network

N=742; SOURCE: 2014 IT PRIORITIES SURVEY

74 — The percentage of companies that will be giving more employees mobile computing devices (laptops, tablets, smartphones) in 2014.

N=861; SOURCE: 2014 IT PRIORITIES SURVEY


FIRST LOOK

Virtualized GPUs Bring VDI Home

What GPU virtualization is doing for virtual desktop infrastructure.

BY ALYSSA WOOD

GPU VIRTUALIZATION MEANS VDI can go where it’s never gone before—and that’s big news for the still-niche desktop delivery technology.

Virtual desktop infrastructure (VDI) has become viable for more types of users, thanks to virtualized graphics processing unit (GPU) cards, which offload graphics processing to the server, improving application performance. Users that access 3-D or computer-aided design (CAD) applications, as well as video-intensive and gaming apps, won’t see solid VDI performance without some kind of processing offload, said Todd Knapp, the CEO of Envision Technology Advisors. “CAD doesn’t work in VDI without this technology,” he said.

Before GPU virtualization came along, VDI was primarily used by task workers. Desktop virtualization from Citrix and VMware could be deployed for about 60% to 70% of users before coming up against pockets of users that required more GPU power, according to Justin Boitano, a director of marketing for Nvidia, the primary provider of virtualized GPU technology. “They would hit users that have these graphics needs, and they weren’t able to fully meet those needs,” he said.

To keep up with the increasing needs of power users, desktop virtualization providers have jumped at the chance to support Nvidia’s GRID technology. Citrix recently added hardware GPU sharing to XenDesktop 7, and VMware introduced the virtual dedicated graphics acceleration feature in View 5.3, both based on GRID. Plus, Amazon Web Services in November released a G2 instance of its Elastic Compute Cloud with support for GRID, to enable GPU acceleration in the cloud.

WHERE THE VIRTUALIZED GPUS ARE

Florida Atlantic University’s IT department began using VDI four years ago in an attempt to provide remote access to 3-D apps for students and professors in graphics and game programming classes—a group that makes up nearly 30% of its user base.



The department installed physical workstations with ATI graphics cards and Teradici Hardware Accelerator chips—but it wasn’t enough, said Mahesh Neelakanta, a director of technical services at FAU. That setup had a large physical footprint and only allowed for a one-to-one connection between user and machine.

“GPU virtualization is changing all that,” Neelakanta said. The university installed Nvidia K1 and K2 boards earlier this year, which provide more flexibility and consolidation by allowing IT to run about eight to 12 users per shared GPU board. “We’ve been able to lower our own IT requirement because we’re able to deploy the image using VDI and provide virtual desktops with 3-D acceleration to those students regardless of where they are,” Neelakanta said.

With virtualized GPUs bringing desktop virtualization to more users, VDI could see higher adoption in the coming years. Still, companies must make sure that GPU virtualization will actually provide them benefits. For example, if you’re delivering video, you may simply need remote display protocol optimization instead, Knapp said. There has been a lot of hype around this technology, so businesses should choose virtualized GPU technology wisely.

ALYSSA WOOD is site editor for SearchVirtualDesktop.com. Write to her at awood@techtarget.com or follow @VirtDesktopTT on Twitter.

SERVER VIRTUALIZATION


When Two Hypervisors Are Better Than One

Multi-hypervisor environments are becoming common, but will they end up as collateral damage in the virtualization war?

BY BETH PARISEAU

VMWARE’S VSPHERE HYPERVISOR remains the dominant player in the server virtualization market, but as alternatives like open source KVM and Microsoft’s Hyper-V mature, some enterprises are hedging their bets by running multi-hypervisor environments. The major benefit enterprises glean from the heterogeneous approach is usually financial savings.

“It has the potential to save us hundreds of thousands of dollars,” said Searl Tate, director of engineering for international law firm Paul Hastings LLP, based in Los Angeles.

The firm is rolling out Windows Server 2012 R2 Hyper-V in a newly built European data center. That platform, released to manufacturers just last August, is widely regarded as bringing Hyper-V to full competitive strength against vSphere. It enables features such as shared-nothing live migration of virtual machines and built-in replication through integration with System Center Virtual Machine Manager (SCVMM).

“We’re a Microsoft shop by way of entitlement—we have an enterprise licensing agreement—but like a lot of others we haven’t run Hyper-V historically except in the lab,” Tate said.


“The indirect savings comes from not having to add VMware licenses—in our European data center, that would’ve been $250,000.”

But these savings can come at a cost of their own; operational challenges are also inherent in managing two different IT infrastructure platforms in the same data center.

FANTASIZING ABOUT TRUE FULL MANAGEMENT TOOLS

Enterprises considering heterogeneous environments need to weigh the potential capital expenditure (Capex) savings against the operational expenditure (Opex) necessary to manage multiple hypervisors, according to David Kinsman, national technical solutions architect for World Wide Technologies Inc., based in St. Louis, Mo. “We’re moving into a time where customers will have more than one hypervisor in their data center, siloed off potentially into different application pods based on Capex costs,” Kinsman said.

Tools that can manage multiple hypervisors under one console include SCVMM and HotLink Corp.’s SuperVISOR software, which is used by heterogeneous hypervisor shops to manage other hypervisors under the vCenter Server management console. “Now that everything is tied into vCenter, we only have to teach new admins one platform,” said Michael Warchut, senior network engineer for Monsoon Commerce, an e-commerce firm based in Portland, Ore., which uses SuperVISOR to manage some 300 vSphere VMs, 35 Hyper-V VMs and about 40 XenServer instances, as well as Amazon Web Services’ Elastic Compute Cloud. “We’re able to keep a record of who does what and manage our security notifications [through HotLink] as well,” he said.

But these tools also have limitations, Kinsman pointed out. “You’re never going to get the real feature parity you’d get if you were managing a given hypervisor with its own manager,” Kinsman said. Until there are virtualization management platforms on the market that offer full feature parity when managing heterogeneous hypervisors, there will be inefficiencies to running more than one hypervisor in the data center, Kinsman said. He predicted that without this type of management platform, the market will move back to more of a homogeneous hypervisor state. “The odds of such a manager existing in the next two to three years are slim,” he said.

Paul Hastings’ Tate said his team will keep vCenter to manage vSphere and SCVMM to manage Hyper-V as separate consoles. “We have also rolled some of our own tools, some in PowerShell, some in C#, and we wrote our own infrastructure portal manager,” Tate said. “We’re going to have to live that way.”

Because of the hassle involved in managing heterogeneous hypervisors, some industry watchers believe heterogeneous hypervisor environments are a transitional phase in the deployment of virtualized infrastructure, rather than a permanent state of affairs. “We’re eyeing the possibility of Hyper-V everywhere, but you’ve got to start somewhere,” Tate said.


Despite its cost, some industry watchers say the winner of that movement back to homogeneity will still be VMware, because of the way it’s branching beyond the hypervisor and into infrastructure services such as software-defined networking and storage. “Part of the pull back toward VMware is going to be VMware and Nicira together, as well as a whole bunch of companies recently founded by people who used to work at VMware supporting VMware’s hypervisor first,” said Andy Brown, an entrepreneur who until recently served as group CTO at global financial services firm UBS.

Others see a very different picture over the next few years. “Hyper-V is not going anywhere anytime soon, and neither is vSphere,” said Warchut.

WHEN WORLDS COLLIDE

As cloud computing comes online, infrastructure intelligence is moving up the stack, beyond the hypervisor to the overall cloud orchestration or application layer—amping up the debate about heterogeneous IT management as a transitional or permanent state of being all over again, with some new players added to the mix.

There are also cloud management tools on the market today which can manage both multiple cloud environments and their underlying heterogeneous hypervisor environments, but it is still early days for cloud in the enterprise. For now, experts say, keeping options open is a wise choice as the cloud market continues to evolve. “I think what happens is that the conversation moves to the next level up, which is that I need to manage multiple clouds as well as multiple hypervisors, and how do I do that?” said Brown.

While the enterprise virtualization market is divided primarily between VMware and Hyper-V, cloud computing platforms are also bringing open source KVM into the spotlight at some companies. “Whenever anybody’s starting or they have the ability to start from scratch, they’re running KVM,” said Mark Shirman, president and CEO of cloud migration firm RiverMeadow Software, based in San Jose, Calif. “So much of the cloud is still being architected by geeks,” Shirman said. “They love the open source environment … everybody wants to get into the weeds.”

Unlike Hyper-V, however, open source KVM is far from feature parity with vSphere. Under the much-hyped OpenStack cloud management platform, for instance, KVM doesn’t offer live migration, distributed resource scheduling or automated restarts of machines for high availability, according to Kenneth Hui, open cloud architect with Rackspace Hosting, based in San Antonio, Tex. Still, Hui argued before a meetup group of OpenStack enthusiasts in Minneapolis on Oct. 22 that these features will belong to the cloud orchestration layer eventually, rather than the hypervisor. “Servers are fragile, but the cloud is not,” Hui said. “Resiliency should be handled at various layers of the cloud, primarily at the application layer and not the server layer.”

BETH PARISEAU is senior news writer for SearchCloudComputing.com. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.


Five New Devices

Coming soon to an office near you


AMAZON KINDLE FIRE HDX Early Kindle Fire tablets weren’t exactly ready for the enterprise—they didn’t even have a native email client. Amazon has wised up since then, to the point that the new Kindle Fire HDX carries the tagline, “Built for work and play.” The new tablets ship with the OfficeSuite productivity app installed, and additional enterprise features, such as built-in VPN and hardware-based encryption, are on the way.

APPLE iPAD AIR AND iPHONE 5S Most of your company’s Apple users have likely updated to iOS 7 by now, so the post-holidays influx of the iPad Air and iPhone 5s won’t pose much of a challenge for IT. But there are some device-specific features to pay attention to, like the Touch ID fingerprint scanner in the iPhone 5s. Business leaders should also consider how these new devices can enable more productivity. Buyers will receive a free copy of the iWork suite, which lets users create documents, spreadsheets and presentations with Microsoft Office compatibility.


MICROSOFT SURFACE 2 The Surface Pro 2 is Microsoft’s enterprise-class tablet, but its little brother, the Surface 2, was the bigger seller during the holiday season. The Surface 2 comes with the RT version of Windows 8.1, which runs on an ARM-based processor and therefore can’t run Windows desktop (i.e., legacy) applications. This architecture means the Surface 2 can’t join a domain either, but a new feature called Workplace Join lets the device connect securely to certain corporate assets. With a mobile device management tool, IT can then exert some controls over the tablet.

SAMSUNG GALAXY NOTE 3 The mobile worker who doesn’t want to carry a phone and a tablet makes a good candidate for the phablet. The Galaxy Note is the most popular of these hybrid devices, and Samsung has aimed the latest version squarely at business users. The screen is big enough to have multiple apps open at once, and the S Pen stylus offers ways to interact more effectively with those apps’ data. The Galaxy Note 3 also offers Samsung’s SAFE set of enterprise security features, and it’s compatible with the KNOX secure container technology. —COLIN STEELE


EXPLAINED

Leaf-Spine Architecture

A new type of network may shake up the inner workings of the data center.

BY ETHAN BANKS

FOR MANY YEARS, data center networks have been built in layers that, when diagrammed, suggest a hierarchical tree. The bottom of the tree is the access layer, where hosts connect to the network. The middle layer is the aggregation, or distribution, layer, to which the access layer is redundantly connected. The aggregation layer provides connectivity to adjacent access layer switches and data center rows, and in turn to the top of the tree, known as the core. The core layer provides routing services to other parts of the data center, as well as to services outside of the data center such as the Internet, geographically separated data centers and other remote locations.

This model scales somewhat well, but it is subject to bottlenecks if uplinks between layers are oversubscribed. This can come from latency incurred as traffic flows through each layer and from blocking of redundant links (assuming the use of the Spanning Tree Protocol, or STP).

In modern data centers, an alternative to the core/aggregation/access layer topology has emerged known as leaf-spine. In a leaf-spine architecture, a series of leaf switches form the access layer. These switches are fully meshed to a series of spine switches. The mesh ensures that access-layer switches are no more than one hop away from one another, minimizing latency and the likelihood of bottlenecks between access-layer switches.

[Figure: the traditional three-tier data center topology—core, aggregation and access layers]


[Figure: a leaf-spine topology—spine and leaf layers]

When networking vendors speak of an Ethernet fabric, this is generally the sort of topology they have in mind.

Leaf-spine architectures can be layer 2 or layer 3, meaning that the links between the leaf and spine layer could be either switched or routed. In either design, all links are forwarding; that is, none of the links are blocked because STP is replaced by other protocols.

In a layer 2 leaf-spine architecture, spanning-tree is most often replaced with either a version of Transparent Interconnection of Lots of Links (Trill) or Shortest Path Bridging (SPB). Both Trill and SPB learn where all hosts are connected to the fabric and provide a loop-free path to their Ethernet MAC addresses via a shortest-path-first computation. Brocade’s VCS fabric and Cisco’s FabricPath are examples of proprietary implementations of Trill that could be used to build a layer 2 leaf-spine topology. Avaya’s Virtual Enterprise Network Architecture can also build a layer 2 leaf-spine, but instead implements standardized SPB.

In a layer 3 leaf-spine, each link is a routed link. Open Shortest Path First is often used as the routing protocol to compute paths between leaf and spine switches. A layer 3 leaf-spine works effectively when network virtual LANs are isolated to individual leaf switches or when a network overlay is employed. Network overlays such as VXLAN are common in highly virtualized, multi-tenant environments such as those at Infrastructure as a Service providers. Arista Networks is a proponent of layer 3 leaf-spine designs, providing switches that can also act as VXLAN Tunnel Endpoints.
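As a rough illustration of how the full mesh shapes a design, here is a small Python sketch (my own example, not from the article) that counts fabric links and estimates a leaf switch’s uplink oversubscription ratio. The port counts and speeds are assumptions chosen purely for illustration.

```python
# Rough leaf-spine sizing: every leaf connects to every spine (a full mesh), so
# host-facing bandwidth versus uplink bandwidth sets the oversubscription ratio.
# All port counts and speeds below are assumptions for illustration only.

def fabric_links(leaves: int, spines: int) -> int:
    """Full mesh between layers: one link from each leaf to each spine."""
    return leaves * spines

def oversubscription(host_ports: int, host_gbps: float,
                     uplinks: int, uplink_gbps: float) -> float:
    """Ratio of southbound (host) to northbound (spine) bandwidth on one leaf."""
    return (host_ports * host_gbps) / (uplinks * uplink_gbps)

spines, leaves = 4, 16
print(f"links in the fabric: {fabric_links(leaves, spines)}")              # 64
# A leaf with 48 x 10 GbE host ports and one 40 GbE uplink to each spine:
print(f"oversubscription: {oversubscription(48, 10, spines, 40):.1f}:1")   # 3.0:1
```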

ETHAN BANKS, CCIE #20655, has been managing networks for higher ed, government, financials and high tech since 1995. Ethan co-hosts the Packet Pushers Podcast. Contact him at @ecbanks.

DATA CENTER PERFORMANCE

Uptime: The Heart of the Matter

Data center uptime may be IT’s holy grail. Technology and people both play a big role in the quest.

BY CLIVE LONGBOTTOM

A WORKING, ALWAYS-AVAILABLE IT platform is a core requirement of any organization. However, IT’s goal of “dial tone” availability over the years has never quite materialized.

Maybe we are getting closer to achieving this vision, thanks to newer technical architectures such as virtualization and cloud computing. But new technologies only go so far. If organizations really want to improve their uptime, they need to focus on three core principles: automation, modularity and redundancy.

TWO, FOUR, SIX, EIGHT, WHAT CAN WE AUTOMATE?

If the goal is uptime, the first area that needs to be addressed is not the silicon-based equipment that makes up the enterprise data center, but the carbon-based life-forms who nominally maintain and update it.

Unfortunately, people are the main cause of downtime in a modern data center. Poor scripting, applying patches incorrectly, unplugging the wrong piece of equipment—if you need something done completely wrong, bring in a human.


Fortunately, much of what is needed to keep systems running these days can be done in a lights-out environment. It’s now possible to automate patches, updates and any number of other software tasks, such as provisioning and deprovisioning applications.

Many problems are caused by attempts to apply a patch or upgrade to an ineligible system, such as when there is insufficient storage on a server or when a specific device driver is required but not available on the machine. Good tools should automatically identify such issues before attempting any action. They should either fix them automatically or raise an exception to an admin and skip the action until a human has dealt with the problem.

Automation tools should also be able to monitor and report on the status of not only individual applications, but also all the apps that support enterprise processes. It is a waste of time to start a process if the last part of the process cannot be completed because a downstream application or piece of hardware has failed. Better to identify any problems early, and then look at remediating them in real time. This may involve moving a virtual machine (VM) from one physical environment to another, along with all of its dependencies around storage and networks. Again, this can be done rapidly and effectively through automation.

By catching problems at an early stage, the move can be made in real time and systems switched over without any noticeable change for workers. This proactive approach has so much more going for it than a standard reactive response: waiting for users to phone the help desk and then sending people into a data center to address a problem is no good to a modern organization.
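As a sketch of the kind of pre-flight check described above, here is a short, hypothetical Python example. The Host fields, the two-times storage headroom rule and the driver check are invented for illustration and are not taken from any particular automation tool.

```python
from dataclasses import dataclass, field

# Hypothetical pre-flight check before an automated patch run: confirm the
# prerequisites mentioned above (free storage, a required driver) and either
# proceed or raise the issues to an admin instead of failing mid-patch.
@dataclass
class Host:
    name: str
    free_storage_gb: float
    drivers: set = field(default_factory=set)

def preflight(host: Host, patch_size_gb: float, required_driver: str) -> list:
    """Return a list of blocking issues; an empty list means safe to patch."""
    issues = []
    if host.free_storage_gb < patch_size_gb * 2:   # assumed 2x headroom rule
        issues.append(f"{host.name}: insufficient free storage for the patch")
    if required_driver not in host.drivers:
        issues.append(f"{host.name}: missing required driver '{required_driver}'")
    return issues

host = Host("app01", free_storage_gb=3.0, drivers={"raid", "nic-teaming"})
problems = preflight(host, patch_size_gb=2.5, required_driver="gpu-passthrough")
if problems:
    print("Skipping host and raising an exception to an admin:", problems)
else:
    print("Safe to patch", host.name)
```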

Again, avoid human intervention as much as possible. Machines rarely do anything wrong—they carry out the same activity time after time without deviating from the rules provided to them. If the rule is programmed correctly the first time, the data center will continue to do it correctly from there on, time after time after time. A staffer may have done the same task correctly 99 times—and then have an off day or just an off moment on the 100th occasion. Use automation, and get people to focus on getting that first-time rule coded correctly.

MODULAR, NOT MONOLITHIC

In a virtualized, cloud-based environment, it is actually quite unlikely that the failure of an individual piece of hardware will cause a data center to have appreciably lower overall availability. Older applications are generally the problem. Having large, monolithic applications causes difficulties even within the world of superfast virtualized environments. Provisioning and spinning up a new virtual machine containing a full stack, from operating system through to a full instance of SAP ERP or Oracle E-Business Suite, will take time because of scale and complexity.

Moving towards a composite application approach can really help here. The first job is to take the business process, break it down into a set of tasks and then see what technical capabilities are required to facilitate each of these tasks. By finding the right technical functions as small pieces of capability and pulling them together on an as-needed basis, you can get a greater level of flexibility.


Processes can be changed, and only the tasks that are affected require new technical components. In addition, you gain much higher availability. For example, take a process that consists of five tasks. Each is facilitated by a different technical function. One fails—for whatever reason. The same technical platform can be spun up far more quickly than if that same function failed as part of a monolithic application, where the whole stack would have to be reprovisioned. Indeed, since the other four functions are still capable of running, activities can be carried on while the failed component is fixed. Assuming that the organization is storing and forwarding transactions correctly, individuals can still carry out their own parts of the overall process, even during an extended outage.

DOUBLE DOWN ON REDUNDANCY

Although I’ve said that hardware is not the real issue, don’t take that as an excuse not to protect the data center against equipment failure. Engineering for uptime requires a degree of equipment redundancy. This goes not just for servers and storage, but for the network and the data center facility as well. Virtualized networks allow for dynamic reallocation of network connections should a network interface card fail or a specific route become congested. Modular chillers, uninterruptible power supplies and auxiliary generators allow facilities to survive equipment failures.

For basic uptime, go for one more piece of equipment than is required (N+1). For higher levels of uptime, go for more items of redundant equipment (N+M). For the highest levels of platform uptime, consider long-distance mirroring.
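To put rough numbers on N+1 versus N+M, here is a small Python sketch of my own. It treats unit failures as independent and assumes 99.5% availability per unit, so the figures are purely illustrative of how quickly spares improve platform uptime.

```python
from math import comb

# Illustrative redundancy math: with N required units plus M spares, the platform
# is up as long as at least N of the N+M units are healthy. The 99.5% per-unit
# availability is an assumption, and failures are treated as independent.
def platform_availability(n_required: int, m_spare: int, unit_avail: float) -> float:
    total = n_required + m_spare
    return sum(
        comb(total, healthy) * unit_avail**healthy * (1 - unit_avail)**(total - healthy)
        for healthy in range(n_required, total + 1)
    )

for spares in (0, 1, 2):   # N alone, N+1, N+M with M=2
    a = platform_availability(4, spares, 0.995)
    print(f"4 required units + {spares} spare(s): {a:.6f} availability")
```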

Businesses that cannot tolerate any downtime whatsoever need complete mirroring in real time of live VMs, storage and virtual network dependencies across a suitable distance. Redundancy must be built into how the two facilities are networked together, via multiple WAN connections operated by different carriers. Obviously, costs are pretty prohibitive, so make sure that this is really necessary.

In many cases, the business will actually be well served by live synchronized data backed up by on-demand resources for spinning up VMs. Application images can be spun up rapidly, matching against the data within minutes in many circumstances. There will be a hit on availability while the images do spin up, but the lower cost of not having to maintain two hot facilities can make this good enough for the majority of an organization’s needs.

The key is to automate wherever possible. Keep humans away from IT systems wherever possible, and use suitable tools to provide repeatable approaches to common tasks. Architect for failure; use redundancy for failover, but make sure that you understand what the business means by “highly available.” In many cases, you will find that it really means “minimize downtime and maintain data integrity.” This data center approach is different—and can save an organization millions of dollars.

CLIVE LONGBOTTOM is the founder of Quocirca, an IT research and analysis company based in the U.K.


Overheard at Amazon re:Invent

“Three years ago, every cloud conversation was about security, and we wrote stacks of white papers that are now getting dusty. Today, the one question is about performance.”


—ANDRES RODRIGUEZ, CTO, Nasuni

“THERE IS THIS NETFLIX DRINKING GAME GOING ON. EVERY TIME WE MENTION NETFLIX ONSTAGE, YOU’RE SUPPOSED TO TAKE A SHOT.”

—WERNER VOGELS, AWS CTO, during the second-day keynote

“We’re engaged in a ‘proof of pricing’ exercise with AWS. We believe that the process efficiencies will be there, but we’re not so sure about the cost efficiencies.”

—Lead systems engineer for a U.S. federal regulatory agency

“WE WILL PURSUE PRIVATE CLOUD DEALS IF THE OPPORTUNITY IS LARGE ENOUGH TO WARRANT THE DISTRACTION.”

—ANDY JASSY, head of AWS, when asked if private clouds similar to the one Amazon built for the CIA will be coming


IN THE MIX


Less Is More

The IT industry talks the talk on green computing, but talk is cheap.

BY BOB PLANKERS

WHILE THE IT industry has paid lip service to the idea of “green computing,” the truth is that there is nothing environmentally friendly about technology. From the toxic substances used in hardware to the vast amounts of power consumed by data centers, the only thing that is ever truly green is the color of the money we spend, at least in the U.S.

The public cloud changes things a bit, in the same way that electric cars change things. Electric cars still cause pollution; it’s just that the pollution is centralized to where power is generated, and there is less of it per car because of the economies of scale. Public clouds still consume vast amounts of power, but that power is increasingly from places with capacity to generate it cheaply. In addition, there is also the concept of economy of scale, which forms the basis for all things related to public cloud.

FULL OF HOT AIR

Technology certainly drives some improvements that benefit the environment. But it’s hard to reconcile. And the latest disk drives from HGST/Western Digital have brought these ideas to the forefront of my mind. Their new 6 terabyte disk drives are filled with helium, instead of traditional air. That sounds interesting, until you think about the nature of helium.

The air inside a drive is a problem for manufacturers. It’s a mix of gases, its density changes, and it holds heat and moisture and causes friction with the moving parts. In turn, the drives need larger motors, which consume more power and generate more heat. These factors also affect how close the platters in a drive can be, which directly affects storage capacity and density.

Helium is a much lighter gas, and is considerably less dense, so HGST can produce a drive with 50% more capacity than anything else using 23% less power at idle. Helium is also inert, so it won’t react with the metals inside the drive. Nor will it catch fire, like hydrogen. (Hydrogen would perform better inside a drive because it’s even lighter, so long as you didn’t mind your array blowing up from time to time.)

Helium is abundant, but it’s too light to stay in Earth’s atmosphere, and it isn’t cost-effective to make helium. Our helium comes from radioactive decay inside the Earth’s crust.


Over millions of years that helium floats up through the Earth and gets trapped in the same pockets as natural gas. It’s also harvested in much the same way. We owe our latest innovation in storage technology to the same folks that are in the news for hydraulic fracturing, or “fracking”—arguably not the greenest of energy sources.

LESS IT, IN ALL REGARDS, IS TRULY MORE.

At the same time, IT is competing with medical and scientific uses for helium. As an example, many high-technology medical scanners need to be cooled with liquid helium. I can’t help but think about which I’d rather not have: A party balloon, twice as many movies on disk, or cancer.

In IT we’re all innately used to tradeoffs, but perhaps we don’t realize what we are trading. Are we trading disk capacity and cost for the ability to cure cancers or do science research? Are we trading disk capacity for clean, nonflammable water in communities around the world?

There’s a saying among storage people that the best I/O is the one you didn’t have to do. Perhaps that is also the key to green computing: Less IT, in all regards, is truly more.

BOB PLANKERS is a virtualization and cloud architect at a major Midwestern university. He is also the author of The Lone Sysadmin blog.


ARE WE THERE YET?


The End of Cloud Nirvana

Cloud computing shows vulnerabilities as technological reality sets in.

BY JONATHAN EUNICE

“CLOUD COMPUTING” IS often pitched as the ultimate in IT. Infinitely malleable, it’s whatever package of flexibility, economy and let-someone-else-deal-with-the-hard-parts the proponent wants it to be. Events in recent months, however, have ripped deeply into the credibility of visions of cloud nirvana, reinforcing the value of in-house IT capability.

Not least is the failure of Nirvanix, a well-funded provider of enterprise storage in the cloud. Suppliers fall in every industry, but Nirvanix departed in a way that left customers deeply disappointed—not just about one company, but about the prospects for cloud services overall. Nirvanix appeared to be doing well, and then it faced sudden financial meltdown. With little warning, it told clients they had just two weeks to retrieve their data and make other arrangements.

Two weeks would be a hellacious timeline for the most agile Web shop. But for enterprise IT shops running databases, applications and analytics that their businesses depend on day to day and minute to minute, that’s insanely little warning. Even optimistically assuming that a suitable alternate infrastructure or service was immediately available, two weeks isn’t much time to get data out of Nirvanix, onto an alternate infrastructure, qualified for production use, and then up and running. That’s especially true during a high-stress period when every other customer is rushing to do the same thing.

This kind of failure mode, in which everyone freaks out all at once, affects other shared services, such as those for Disaster Recovery as a Service. Recovering from a single-business failure like a data center fire is a great use for cloud computing. But if a storm, an earthquake or another disaster affects a wider area, everyone nearby will be forced to evacuate, fail over or restore simultaneously. The shared, amortized cost model that makes cloud look magically cheap is less appealing when everyone bangs on that shared infrastructure at the same time. There probably won’t be enough resources to go around—at least not with the great performance and responsiveness one can see when those shared resources aren’t running at frantic, historic high-water levels.


YOU CAN GO HOME AGAIN

Nirvanix’s failure is noteworthy because we’re deep into the so-called Cloud Age. But it’s actually just the latest in a series of failed storage service offerings going back a decade. See also Cirtas and StorageNetworks, among others. “Go with big, proven, stable providers” is trusted advice against the vagaries of startups. But Iron Mountain and EMC also shuttered storage-in-the-cloud services in recent years. Where exactly is an enterprise supposed to go for safe, solid ground? Home. Back to providing—or at least managing—those services in-house. Or to a multisource strategy.

Availability isn’t the only issue. 2013 also revealed a widespread lack of network and data privacy. In June, Google argued its legal position: “A person has no legitimate expectation of privacy in information he voluntarily turns over to third parties.” What?! This directly contravenes Google’s privacy policy, not to mention numerous laws, social norms and business contracts the world over. That a core provider could officially promote such a careless attitude is chilling to personal, much less business, use of cloud.

At the time, I wondered if you could stab cloud’s promise in the heart any harder. It turns out you can. Later in the year, we learned that national security agencies are siphoning off large swaths of all telephone records and data communications. One program, “MUSCULAR,” apparently taps all Google inter-data center traffic. That shocked even Google. The once-theoretical risks around data privacy and security have become urgent concerns.

WHERE EXACTLY IS AN ENTERPRISE SUPPOSED TO GO FOR SAFE, SOLID GROUND?

Now, cloud computing isn’t going away. Even with proven privacy and availability exposures, the economics and opportunities remain compelling for many uses. The extreme “Everything will be cloud!” mania, however, is dead. Most enterprises won’t stand for it. It would be negligent to do so. They’ll use cloud resources but in measured, cautious, hybrid ways. That makes the ability to provide flexibility, efficiency and elastic scalability in-house—in other words, modern infrastructure—central to the enterprise IT mission. n

JONATHAN EUNICE is principal IT adviser at analyst firm Illuminata Inc. Contact him at moderninfrastructure@techtarget.com.


END-USER ADVOCATE


Better Late Than Never

The device management world may finally be catching up with its users.

BY BRIAN MADDEN

MODERN ENTERPRISES ARE awash in a myriad of platforms: Windows, Linux, Mac, iOS, Android, BlackBerry. But there’s no single management framework that can manage them all. Even if you focus just on the end-user devices, you need System Center to manage Windows desktops, BlackBerry Enterprise Server to manage BlackBerrys, a thin client management tool to manage your thin clients, and some kind of mobile device management (MDM) or enterprise mobility management (EMM) suite to manage iOS and Android phones and tablets. Unfortunately, each of these requires a different skillset, team and technique, which ultimately leads to a fragmented end-user management environment. Fortunately the winds of change are upon us, thanks to several recent advancements.

TOWARD UNIFIED DEVICE MANAGEMENT

For Windows desktops, Windows 8.1 has a new feature called “Workplace Join.” Prior to 8.1, if you wanted to manage a Windows client, you had to join it to a domain. This worked fine for corporate-owned devices, but it didn’t make sense for home computers or for users who wanted to use their own devices. (Could you imagine domain-joining a BYOD laptop? It was the equivalent of the IT department “rooting” a user’s computer.) Windows 8.1’s Workplace Join provides a middle ground between a full domain join and a completely unmanaged device. A Workplace Joined client can securely access corporate resources when it needs to (giving IT some peace of mind) while still allowing the user to have full control of their laptop when they’re not accessing corporate resources.

Windows 8.1 also adds support for the Open Mobile Alliance Device Management API, which allows organizations to manage client settings via MDM and EMM tools like those from MobileIron, AirWatch or Citrix. This is great because it means you can use a single tool to manage your phones, tablets and laptops.

Thin clients can receive similar treatment. While there have always been thin clients based on Windows Embedded OSes, their cost and complexity limited them to specific corners of the IT world. The vast majority of thin clients have traditionally been powered by Linux, meaning that IT shops had to run proprietary management suites to secure and manage them.


But now we’re starting to see thin clients running on Android. Sure, older versions of Android were built for touch interfaces and small screens, but recent builds have broader support for physical keyboards, mice and trackpads, and we’re starting to see Android-based laptops and convertible devices. Thin client makers are taking note, and now there are several thin clients on the market that run Android instead of Linux. These thin clients plug into regular screens, keyboards and mice, and they use the Android versions of desktop client software to connect to remote computing environments via Citrix, VMware and Microsoft. Android thin clients also have the ability to run Android applications locally, meaning they run mainstream Android doc sharing, file syncing and Web browser applications instead of obscure Linux desktop products. The real benefit of an Android thin client, however, is that you can manage it with the exact same MDM or EMM tools that you use to manage your phones and tablets!

Mac desktops and laptops can be part of this too. The latest version of Mac OS X (Mavericks) has MDM-like extensions that allow Mac OS X devices to be managed with the same Apple Profile Manager software that you use to manage iPhones and iPads.

So overall, things are looking good for enterprise end-user device management. It’s 2014, and it’s finally possible to consolidate your management tools while still being able to manage all the different platforms that your users require. n

BRIAN MADDEN is an opinionated, supertechnical, fiercely independent desktop virtualization and consumerization expert. Write to him at bmadden@techtarget.com.


Modern Infrastructure is a SearchDataCenter.com publication.

Margie Semilof, Editorial Director

Alex Barrett, Editor in Chief

Christine Cignoli, Senior Site Editor

Phil Sweeney, Managing Editor

Eugene Demaitre, Associate Managing Editor

Laura Aberle, Associate Features Editor

Linda Koury, Director of Online Design

Rebecca Kitchens, Publisher, rkitchens@techtarget.com

TechTarget, 275 Grove Street, Newton, MA 02466 www.techtarget.com

© 2014 TechTarget Inc. No part of this publication may be transmitted or reproduced in any form or by any means without written permission from the publisher. TechTarget reprints are available through The YGS Group.

About TechTarget: TechTarget publishes media for information technology professionals. More than 100 focused websites enable quick access to a deep store of news, advice and analysis about the technologies, products and processes crucial to your job. Our live and virtual events give you direct access to independent expert commentary and advice. At IT Knowledge Exchange, our social community, you can get advice and share solutions with peers and experts.

COVER PHOTOGRAPH AND PAGE 3: GETTY IMAGES/ISTOCKPHOTO