
Musings on data centres Volume 2

This report brings together a series of articles first published through ComputerWeekly
January 2014

2013 continued the era of big changes for data centres. Co-location services continued to increase; cloud computing became more of a reality; "software defined" was applied to anything and everything. Keeping up to speed with the changes is difficult at the best of times, which is why Quocirca has pulled together the set of articles it wrote for ComputerWeekly throughout 2013 as a single report.

Clive Longbottom
Quocirca Ltd
Tel: +44 118 9483360
Email: Clive.Longbottom@Quocirca.com

Copyright Quocirca 2014



This report brings together a series of articles first published through ComputerWeekly:

Data centre resolutions - make them, and stick to them
Any New Year brings the opportunity to review and draw up a new list of priorities. Here are Quocirca's main resolutions for data centre managers, which should help in creating a more fit-for-purpose IT platform within an organisation.

Data centre environmental monitoring and metrics
While IT equipment is covered by many vendors and systems management specialists, the implementation, monitoring and management of the environmental situation within a data centre is often less of a priority. This article looks at the various areas and what to watch out for.

Oh dear. Our provider has gone bust.
The cloud seems like a good idea. Just like hosting did. Or the application service provider (ASP) market. However, a Plan B has to be in place to cover what your organisation has to do if your provider goes bust.

Moving data and applications in the cloud
If you do go to the cloud, you will want as much flexibility as you can possibly get. For this, you must understand what can, and what cannot, be done when it comes to moving data and applications around in the cloud.

The Software Defined Data Centre - is it all rosy?
Software defined environments are all the rage. Networks, servers and storage each have their own SDx moniker, and now the software defined data centre (SDDC) has been mooted. Is the world ready for this?

Sweating IT assets - is it a good idea?
When times are hard, it is very tempting to try and get more life out of your IT equipment. However, this may not be a cost-effective way of managing your ICT platform.

Managing the software assets of an organisation
Even when you have a good level of control over the management and refresh of your hardware assets, there still remains the software. Many organisations have allowed their software to get out of control - here are some tips about regaining control and gaining money in the process.

How to check the health of your data centre
Just how healthy is your data centre? With the way that technology, and the way it is used, has morphed, an organisation's IT platform is now far more like a living body. How can you carry out a proper health check across the complete platform?

The BYOD-proof datacentre
Bring your own device (BYOD) is taxing the minds of many IT directors and data centre managers at the moment. Just how can a data centre be architected so as to embrace BYOD to the benefit of the organisation and users alike?

Is your datacentre fit for high performance computing and high availability?
Do you need high performance computing (HPC)? More to the point, would you know if you didn't need it? If you do need it, are you sure that your data centre is up to housing a continuously running HPC system?

Data centre highlights of 2013
As 2013 closed, it was time to look back across the year and discuss what Quocirca saw as being the major happenings and news.


Data centre resolutions - make them, and stick to them


It's that time of year again, where resolutions are made and generally broken within a few days. However, from an IT perspective, maybe it's time to make some that can be stuck to - not on an individual basis, but at a level that can help you better serve your business. Data centre infrastructure is critical however it is provisioned: completely self-owned and operated, sourced from a co-location facility, or procured via on-demand services from cloud service providers. Ensuring that the overall IT platform remains fit for purpose and supports the business is an imperative, so here are six resolutions that will help make this the case.

First resolution - find those lost items

Like searching down the back of the sofa to find that lost change, it's amazing what you can find lost in a data centre. Previous research carried out by Quocirca shows that it is common for an organisation's asset database to be out by +/-20% on server numbers alone. So, if you have a data centre with 1,000 servers, there could be 200 that are missing or wrongly identified and so are over- or under-licensed. Cost savings can be made by carrying out a proper asset audit, and the best way to do this is to implement an automated asset tracking system so that it can be carried out on a continuing basis, rather than as a one-off, high-cost activity on an ad-hoc basis.

Second resolution - shed a few pounds

In many cases, the way data centres have been run over the long term has led to massive inefficiency in how equipment is utilised. Again, Quocirca research shows that many servers are running at less than 10% of their potential capacity, and storage systems are often less than 30% utilised. Consolidation of applications and virtualisation of IT platforms can drive usage rates up markedly. If a target of 50% for servers is set and achieved, that could free up 80% of existing physical servers (see the sketch at the end of this article). If nothing else, these can be turned off, saving large amounts on the electricity bill. Better still, decommission them and sell them on, saving on licensing and maintenance costs, maybe keeping some of the more modern servers mothballed so that new server purchases can be put back for a while.

Third resolution - exercise more control

Organisations that have consolidated and virtualised still find that things can get out of control. The biggest promise of virtualisation is that it is easier to provision new images of applications and functions than it was before. However, this is also its biggest issue, as developers and even system administrators in the run-time environment can find it very easy to provision a new virtual image and then forget to decommission it after it has been used. Such virtual sprawl can lead to false readings of overall systems utilisation, as the CPU and storage being used by these images are perceived as being part of the live load, yet they are carrying out no useful work. On top of this, every live image is using up licences that could be used elsewhere, or not paid for in the first place. Putting in place application lifecycle management (ALM) tools will help in ensuring that such virtual sprawl is controlled.

Fourth resolution - get out more

The self-owned and operated data centre is no longer the only option. Co-location facilities and cloud computing have expanded the options for how IT functions can be provisioned and served. The mantra for the IT department should no longer be "how can we do this within the data centre?", but "how can this be best provisioned?"
In many cases, this will mean that new applications and functions will be brought in from outside third parties, and so overall network availability has to be more of a concern. Multiple connections to the internet are becoming the norm, ensuring that overall systems availability is not compromised by the network connection being a single point of failure when connecting to the outside world.



Fifth resolution - be more friendly

IT can be seen as the group that likes to say "no". Make 2013 the year where IT embraces change and is better at saying "yes". Put in place systems that make BYOD (bring your own device) something that is encouraged, rather than just part of shadow IT. Ensure that you are aware of how cloud computing works, both in the data centre and as an external platform, and make it your job to be able to advise the business on the best direction.

Sixth resolution - be more flexible

With all the changes that are going on in the general economy and the way IT systems are deployed and used, IT departments need to be far more dynamic and flexible in how rapidly and effectively they respond to the needs of the business. As IT and the business embrace the needs of areas such as cloud and big data, the data centre will need to be more flexible as a facility as well as at the platform level. Don't plan to implement monolithic components: go modular when it comes to uninterruptable power supplies (UPSs), backup generators, chillers and so on. This will make it easier to add or remove incremental capability as required as the data centre grows or shrinks to reflect the organisation's needs.

As with any resolution, the key is to make each achievable and to identify the value in sticking to them. From an IT point of view, a more flexible, performant and controlled overall IT platform can be implemented through embracing relatively small changes such as those above. If the scale of the value of these changes can be demonstrated to the business, it will reflect well on the IT department. Here's to 2013 - a year that is likely to remain challenging from an economic viewpoint, but one where IT has the capability to put itself where it needs to be: at the centre of the business. Just be resolute!
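As a footnote to the second resolution, the consolidation arithmetic quoted above can be written down directly. This is a minimal sketch; the 10% and 50% utilisation figures are the ones used in the article, and everything else is illustrative.

```python
def servers_freed(current_utilisation: float, target_utilisation: float) -> float:
    """Fraction of physical servers that could be freed by consolidating
    workloads from their current average utilisation up to a target level."""
    if not 0 < current_utilisation <= target_utilisation <= 1:
        raise ValueError("utilisations must satisfy 0 < current <= target <= 1")
    # The same total workload needs only current/target of the original servers.
    return 1 - current_utilisation / target_utilisation

# The article's example: servers averaging under 10% utilisation, consolidated to 50%.
print(f"{servers_freed(0.10, 0.50):.0%} of physical servers freed")  # -> 80%
```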

Data centre environmental monitoring and metrics


The data centre is not just a collection of servers and storage systems connected via cables. It is a complex, dynamic facility with a mix of different types of equipment, from uninterruptable power supplies (UPSs) through to switches, and power systems running from 415V three-phase AC down to 5V DC and below. This complexity creates a need to monitor for problems not just at the technical level, but also at the environmental level. The main environmental areas that data centre managers tend to deal with are temperature, fire and water.

Temperature monitoring has long been a core area for data centre managers. The idea of running a data centre at low temperatures (between 18 and 22 degrees Celsius) has long been the norm, and standard temperature monitors have been used to ensure that ambient temperatures remain within specified limits. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) now allows for a data centre to run hotter, provided adequate cooling is applied where it is required. This results in potentially massive energy savings because the overall cooling requirement is far lower. Monitoring the temperature of the overall data centre is therefore less of an issue, but monitoring for distinct hot spots becomes more critical. If a hot spot is left alone, it can result in a fire, which can damage not only the equipment where the fire starts, but can impact other equipment through smoke damage. Temperature monitoring should therefore go hand in hand with fire monitoring.

For fire, a standard approach has been to use heat-sensitive triggers that release either water or a damping gas. Thankfully, the use of water has receded over the years, with most facilities only using it as a backstop measure to extinguish a fire where all else has failed, as trying to put out an early-stage fire in a data centre using water results in the shorting out of lots of electrical circuits and the ruination of the vast majority of equipment. The accepted way is now to use a blanketing gas, such as a naturally occurring non-flammable gas (for example CO2, nitrogen or argon), or a specific commercial gas such as FM200, IG55 or Novec 1230.



The use of heat triggers means that a fire has to have already started, and it is better to use early-detection systems. Here, very early smoke detection apparatus (VESDA) systems can help. Before a fire starts, smoke is given off that is undetectable to the human eye or nose. VESDA systems can pick up these small particles of smoke and react according to rules placed in the system - for example, raising an alarm and pinpointing where in the data centre the problem is, allowing a data centre administrator to shut down equipment in the area or to investigate further. Even earlier detection can be carried out through the use of thermal cameras. These monitor the data centre looking for thermal hot spots, and again can be programmed such that a change in temperature at a specific spot raises an alarm. Therefore, we come full circle, using temperature monitoring to help prevent fires in the first place.

With water, the main aim is to avoid flooding. Having a sloped underfloor with drainage will allow certain flood situations to simply flow through the data centre without causing major damage but, as with fire, it is better to aim for early detection and avoidance rather than trying to deal with a full-scale flood. Moisture sensors in the ceiling to monitor for roof leakage, and the same at floor level, will help in detecting slow failure of the physical environment that could lead to water ingress and problems in the facility. General atmospheric moisture content monitoring should be carried out anyway, as data centres operate best within a specific moisture envelope, but the same systems can be used to monitor for any trend or step change in the moisture content of the facility's air. For a rapid flood situation, such as a river breaching its banks, internal monitoring will not be of much use. Here, a rapid response system such as raising flood barriers should be considered in order to create a bund around the facility to keep the water at bay, at least for long enough to allow systems to be shut down elegantly and control switched to an alternative facility away from the flood.

The last area that has moved from being a relatively simple environmental management task to a far more complex one is the management of airflow. With a standard rack-based open data centre, the aim was to maintain a minimum rate of airflow through the complete data centre so that the output air temperature gave an average across the data centre that fell within the desired range. Higher equipment densities (racks that used to be in the range of 10-15kW now running at up to 35kW) and less air space have made simple cooling approaches inefficient. Also, with higher temperatures now being accepted for running individual items of equipment, more targeted cooling has to be used. For example, spinning magnetic disk drives and CPUs will tend to run hot, whereas peripheral chips and switches will tend to run cooler. Combining this with dynamic load balancing and workload provisioning means that cooling also has to be dynamic, and this needs monitoring at a highly granular level. Therefore, the cooling air being used should be targeted to where it is most needed using ducting and managed airflows. In order to ascertain where the airflows are most needed, thermal monitoring is needed, as detailed above.
Beyond this, computational fluid dynamics (CFD) capabilities, as seen in data centre infrastructure management (DCIM) software such as that from nlyte and Emerson, can then help in designing the optimum use of hot and cold aisles, baffles and ducting to ensure that the minimum amount of chilled air creates the maximum amount of useful cooling. Environmental monitoring is more important in managing a data centre facility than it has ever been. Ensuring that temperature, fire, moisture and airflows are all covered is critical. Pulling all of these together in a coordinated and sensible manner will require an overall software and hardware solution built around a DCIM package. However, an investment in DCIM will soon gain payback: an environmentally stable data centre will be more energy efficient, and will be able to deal with any problems at an early stage, so allowing for greater availability of the technology platform to the business.
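To make the monitoring discussion concrete, here is a minimal sketch of the kind of rule an environmental monitoring or DCIM system might apply to spot temperature readings. The sensor names, readings, thresholds and trend window are illustrative assumptions only, not values taken from any particular product.

```python
# Illustrative rolling history of spot temperatures (degrees C) per thermal sensor.
history = {
    "rack-12-top":    [24.1, 24.3, 24.2, 24.5, 24.4],
    "rack-12-middle": [26.0, 27.2, 28.9, 30.4, 32.1],   # trending upwards
    "ups-room-east":  [22.0, 22.1, 21.9, 22.0, 22.2],
}

ABSOLUTE_LIMIT_C = 35.0   # raise an alarm immediately above this level
TREND_LIMIT_C = 1.5       # or if the spot is warming faster than this per sample

def check_sensor(sensor: str, readings: list[float]) -> list[str]:
    """Return any alarms for a sensor based on absolute level and recent trend."""
    alarms = []
    latest = readings[-1]
    if latest >= ABSOLUTE_LIMIT_C:
        alarms.append(f"{sensor}: {latest:.1f}C exceeds absolute limit")
    # Simple trend check: average rise per sample across the window.
    rise_per_sample = (readings[-1] - readings[0]) / (len(readings) - 1)
    if rise_per_sample >= TREND_LIMIT_C:
        alarms.append(f"{sensor}: warming at {rise_per_sample:.1f}C per sample - possible hot spot")
    return alarms

for sensor, readings in history.items():
    for alarm in check_sensor(sensor, readings):
        print(alarm)
```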


Oh dear. Our provider has gone bust.


There are lots of reasons why moving to a co-location data centre facility or to a hosted or I/P/SaaS (infrastructure, platform or software as a service) technology model is good for the business. For a start, the facility is owned by someone else: ensuring that it meets requirements and managing the facility is their problem. Then there is predictability: a long-term agreement can be put in place with the majority of service providers that covers everything needed (space, connectivity and facility services for co-location; all this plus the hardware and even the software for hosted or I/P/SaaS). But what if that predictability is cruelly shattered?

There are a range of things that can go wrong. The first is where a provider exercises a contractual right that you had missed when you signed on the bottom line. Many providers will have clauses that allow them to review your charges when they themselves are faced with a substantive change. Obvious ones are areas such as VAT: if the government changes VAT rates, the tax will be charged accordingly. Not-so-obvious ones could be in areas such as energy pricing. If a provider hasn't locked in a long-term energy deal with its energy supplier, it will have left itself open to spot pricing, and with energy prices being volatile, it may have to renegotiate its pricing with you - which often just means the customer finding this out when a higher-than-normal bill lands on the desk.

The next surprise could be that your provider is acquired. This may make no difference; it could even be good news, if the company acquiring it has the money and the resources to improve the services you are receiving or to give better coverage or reach on a global basis. However, it could also be that the acquiring company is one that you have made an active decision to avoid, for whatever reason. The long-term contract that you had signed could well tie you in to a company that you do not want to be associated with, which may lead to bad feeling and a lack of trust between the parties.

However, the biggest issue that can hit any organisation using an outside provider is the failure of that company. Yesterday, everything seemed OK; today, there just isn't any service and no-one is returning calls. This is the worst possible news - so how can you attempt to ameliorate this, along with the other issues?

In essence, the basics come down to the contract. In talking with customers who have gone for agreements with large service providers, Quocirca has often heard that the contracts are boilerplated and put all the risk on the customer, with very little on the provider. This just isn't good enough, and as there is plenty of competition in the market, it is possible to find a provider who is willing to enter into sensible negotiations and come up with a bespoke contract that meets both parties' needs. As a starting point, it is necessary to see how major changes to the cost base of the provider will be dealt with. If it has not negotiated energy contracts properly, and yet you have a bullet-proof fixed-price contract with the provider, this may look good. However, while you continue to pay your fixed price, the provider is going bust, as the margins its business model is predicated upon have been hit.
Ensure that the provider has a fixed-price energy contract in place that mirrors your contract with it (within reason: the provider cannot match every customer's contract in this way), and also discuss how any changes in energy price will be dealt with between the two of you. Compromise in advance is always better in these situations than conflict after the event. The contract should also allow you, as the customer, to review your position and call a halt to the agreement with sufficient notice in the event of a material change to the status of the provider, such as through acquisition. It may be surprising, but the contract should also be the place where the failure of the provider is covered. This is not an area that many providers are keen to discuss, as just talking about it is a tacit admission that their organisations are just as mortal as anyone else's.



However, on failure, all the equipment that was owned by the provider becomes the property of the administrator, whose job is not to run a business but to optimise the return to the creditors. Unless this involves being able to find a buyer for the business as a going concern, the administrator is unlikely to have any interest in you as a customer whatsoever. For a co-location contract, make sure that it is written into the contract that you can enter and retrieve your equipment at an agreed end of the contract or on the failure of the company. For hosted or I/P/SaaS contracts, make sure that the contract states that the data is yours, and that the company or its administrators have to allow you access to the data within an agreed timescale. If possible (and this will generally only be possible in the larger contracts), get the provider to agree that you can supply a network attached storage (NAS) device where your data can be backed up on a regular basis, so that if the provider does go bust, you can turn up in a van and take ownership of your device with all the data on it. This still leaves the problem of moving the data to a new location with an equivalent application or service in place; this will have to be the subject of a further article.

In essence, the contract is king. Don't just sign anything because it seems to be the done thing. Negotiate from a position of strength and ensure that you go through all the provider's clauses with a fine-tooth comb. Otherwise, you are just gambling with your organisation's future.
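The NAS escape hatch described above only works if the backups actually happen. As a minimal sketch, assuming the provider agrees to drop periodic data exports into a directory (both paths below are placeholders), a scheduled job could simply copy anything new onto the customer-supplied NAS:

```python
import shutil
from datetime import date
from pathlib import Path

# Placeholder paths: the provider's agreed export drop location and the
# mount point of the customer-supplied NAS device.
EXPORT_DIR = Path("/mnt/provider-exports")
NAS_DIR = Path("/mnt/customer-nas/backups")

def copy_new_exports() -> int:
    """Copy any export files not already on the NAS into a dated folder."""
    target = NAS_DIR / date.today().isoformat()
    target.mkdir(parents=True, exist_ok=True)
    copied = 0
    for export in EXPORT_DIR.glob("*"):
        destination = target / export.name
        if export.is_file() and not destination.exists():
            shutil.copy2(export, destination)   # copy2 preserves file timestamps
            copied += 1
    return copied

if __name__ == "__main__":
    print(f"Copied {copy_new_exports()} new export file(s) to the NAS")
```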

Moving data and applications in the cloud


In the last article, Quocirca looked at how a Plan B is required for dealing with the need to move away from a failed relationship with a cloud provider, or indeed the failure of the cloud provider itself. The article covered how the contract between your organisation and the cloud provider needs to ensure that your organisation remains the stated owner of the data. However, this only takes things so far: it still leaves the issue of what to do with the data afterwards.

Ultimately, the response is "it depends". The first issue is around the application that created the data: can you still gain access to the same application elsewhere? If the existing agreement was for infrastructure as a service (IaaS) or platform as a service (PaaS), then your organisation will have owned the applications anyway, so re-installing them on a different cloud platform should not be overly problematic. In the case of software as a service (SaaS), however, there could be bigger problems.

If the service being offered was based on a standard application (for example, SugarCRM or OpenERP), it should be possible to find another service provider hosting the same application. There may be differences in the implementation, but all that should be required is an extract/transform/load (ETL) action to make sure that the data fits the schema of the implementation that the new service provider has in place. Any modifications that you were allowed to make to the application by the previous provider (such as skinning the application with a logo or the addition of any extra functions) will need to be carried out again with the new provider. In many cases, it will not be possible to pull any of the changes from the old provider, and re-implementing these will be the hardest part of the transition. What it does mean is that any changes that are carried out, even in a SaaS environment, must be documented and stored outside of the SaaS environment: a full change log is necessary so that the changes can be re-implemented if a change of provider is needed.



The real problems come when you are moving from a provider who has proprietary software in place. This may be a provider who has so heavily modified an open source application as to essentially make it a new application. Or it could be a SaaS provider who owns the application and does not allow it to be offered by any other cloud provider on its own platform, such as salesforce.com. However, whereas salesforce.com is unlikely to hit the buffers anytime soon, some of the smaller dedicated SaaS providers are bound to fail, just through the law of averages.

Quocirca recommends that the original choice of SaaS provider takes this risk into account. If you haven't already adopted SaaS, then ensure that the risk of the provider going bust is assessed, along with what effort would be required to take data from its system and get it into a form that is useable by another system over a short period of time. For those who have already made the move to a SaaS provider, it is now time to make sure that the Plan B for getting to a known recovery point objective (RPO) and a known recovery time objective (RTO) is in place.

The first thing needed is to identify what the target application would be. Quocirca advises that this should be either an application widely adopted among SaaS providers, or an application from a very large, and hopefully more financially secure, proprietary SaaS provider. Next is the need to identify the schemas used by both systems. Matching field names and types is necessary here to make sure that fidelity of information is maintained when the data is moved across. This will also define the ETL activity that will be required. Then there is the necessity of testing. It cannot be left to chance and hope that such an activity will work. You will need to carry out a test by taking data from the existing environment and moving it to the new environment. This doesn't have to be based on a permanent contract with the second provider; it is just a test to make sure it works.

Based on the test being successful, you can create a full, formalised plan as to what your organisation needs to do should the worst come about. This should also include indications of how long such activities are expected to take and plans around how the business will continue to operate during this downtime. This may well involve falling back on manual processes, and any data that is gathered during these manual processes will need to be input into the new system as it comes online. The last area that should be covered by the contract is that the service provider has to securely wipe your organisation's data from its systems - something that is more often than not overlooked.

Hopefully, as time progresses and cloud standards bed down, it will become possible to move applications and data out onto standardised cloud platforms, such as OpenStack, and the whole activity should become a great deal more seamless and easy. There are also commercial systems coming to market that may make life easier. For example, Vision Solutions' Double-Take Move can be used to migrate data from one cloud provider to another, even where the source provider refuses to co-operate. Quocirca expects other similar services to come to market over time, but for many, a solid plan for dealing with a migration from the ground up will be needed. Cloud providers are no different to any other commercial entity. There will be failures along the way, and this is no reflection on cloud as a model for implementing an IT platform.
The problem is that any failure of a cloud provider will hit more organisations, as cloud platforms are, by definition, multi-tenanted. Hopefully, these last two articles have provided the kind of information required to make sure that your organisation can plan to minimise the impact of any such failure, whether it be due to a breakdown in the relationship or the complete failure of the cloud provider.
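The schema-matching step described above is, at heart, a small extract/transform/load exercise. The sketch below shows the idea for a handful of hypothetical CRM fields; the field names, type conversions and both schemas are assumptions made for illustration, not those of any real SaaS product.

```python
from datetime import datetime

# Hypothetical mapping from the old provider's export fields to the new
# provider's schema: target field name plus a conversion function.
FIELD_MAP = {
    "cust_name":    ("account_name", str.strip),
    "created":      ("created_date", lambda v: datetime.strptime(v, "%d/%m/%Y").date().isoformat()),
    "annual_value": ("annual_value_gbp", lambda v: round(float(v), 2)),
}

def transform(record: dict) -> dict:
    """Map one exported record onto the target schema, noting unmapped fields."""
    out, unmapped = {}, []
    for field, value in record.items():
        if field in FIELD_MAP:
            target, convert = FIELD_MAP[field]
            out[target] = convert(value)
        else:
            unmapped.append(field)   # flag for manual review rather than silently dropping
    if unmapped:
        out["_unmapped"] = unmapped
    return out

exported = {"cust_name": " Acme Ltd ", "created": "03/06/2013", "annual_value": "12500", "fax": "n/a"}
print(transform(exported))
```

In practice, the testing step the article describes is exactly this: run the mapping against a full extract, check the unmapped and converted fields, and only then formalise the plan.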


The Software Defined Data Centre - is it all rosy?


A while back, VMware coined a term for what it saw as the next-generation data centre: the software defined data centre, or SDDC. Hot on the heels of software defined networking (SDN) and just before software defined storage (SDS), SDDC looks like it heralds a new way of dealing with technology hardware as a platform and the applications that run on top of it.

From a positive point of view, there has to be abstraction between the hardware and the applications in a virtualised environment. If there isn't, then virtualisation cannot deliver on its promise; there will still be dependencies between the application and the physical hardware. Certainly, cloud cannot be implemented in any shape or form without such an abstraction in place. SDDC should enable this to be the case. All the smarts of putting together composite applications, rolling them out, monitoring them and fixing any problems need to be carried out above the physical platform itself. For many organisations, this will be based on bringing together existing solutions from the physical environment to help with new capabilities in the virtual, through development or acquisition. For example, CA has recently acquired Layer 7 Technologies and Nolio, which will help build on previous acquisitions of 3Tera and Nimsoft that helped with filling gaps in Unicenter and Clarity. IBM continues to acquire companies that will help it build on its Tivoli systems management platform. Others will take a virtual-down approach, such as VMware itself with its vCloud Suite and the ecosystem of other software around it.

However, this does lead to issues that may not be apparent at the outset. As a given within a software defined data centre, the physical and virtual assets have to be managed as a single entity. It is no use having one set of tools flagging up that there is a problem in the virtual environment if the problem is down to a physical fault, yet the tools cannot identify that fault for you. If the two sets of tools for physical and virtual are not fully integrated, then they are of little use.

The stage that standards are at can also be a problem. SDDC was first mentioned by VMware, and it has a vested interest in making sure that what it puts out as SDDC software supports ESX and ESXi more than any other hypervisor. Others will be trying to be as agnostic as possible, supporting as many different environments as possible, but may be hampered by the different levels of functionality available on different platforms as ESX, Hyper-V, Xen and other hypervisors continue to mature.

These problems may not just be down to working across multiple hypervisors. Take a modern engineered system: a vBlock, UCS or vStart system. These are highly engineered systems that have a lot of built-in management capability. Any overarching SDDC system will have to be able to deal with these systems or bypass them completely, which negates a lot of the value of going with an engineered solution in the first place. Many vendors are looking at how graphics processing units (GPUs) can be used within their systems as well, allowing certain types of workload to be offloaded to a platform that is more suited to their needs. Others, such as Azul Systems, have highly specific silicon that can be used to run Java-based workloads natively. Is it realistic to expect a single SDDC system to be able to embrace all of these different platforms?



Then there's what IBM is up to with its PureFlex systems. Here, not only is there an engineered system, but there are also multiple different types of silicon: x86 and Power as the main constituents, but this could expand to include mainframe CPUs. IBM runs its own intelligent workload manager within PureFlex, and any SDDC system will need to be able to operate alongside such capabilities.

The impact of the other SDxs also has to be addressed. In an ideal situation, an SDDC system would cover everything: CPUs, storage and networks. However, SDN is ploughing its own furrow and will evolve at a different speed to aspects of SDDC. SDS is, at the moment, a niche for a few small storage players, but is likely to grow in importance as groups such as the Storage Networking Industry Association (SNIA) bring out more standards around cloud storage.

The big issue for SDDC is not that it is a bad idea. In fact, it is a brilliant idea that is a necessity to make cloud work the way it should. No, the problem is more that there are far too many agendas out there in vendor-land, and that these vendors (as always) see little value in adopting and abiding by any standard 100%. Indeed, as with so many standards in the past, expect to see announcements from vendors that say they will support and adopt SDDC, and then add extra functionality that only works with their systems. Eventually, it is likely that SDDC will work its way out as a super-set of management and implementation functions that vendors will write to, providing greater functionality through their own tools that plug in to any system that is there to provide oversight of the total system. It may not be the most elegant technical solution, but it should work and meet the main needs of vendors and customers alike.


Sweating IT assets - is it a good idea?


Times continue to be hard in an uncertain financial climate. It is natural for organisations to want to get more from existing assets, no matter what they are. But at an IT level, does sweating assets for longer periods of time actually make sense?

Most organisations will have a straight-line depreciation model on equipment over a three to five year period. So, if a piece of equipment cost 10,000 to buy and was being depreciated over a five-year period in a straight-line model, it would have a book value of 8,000 after year one, 6,000 at the end of year two, 4,000 at the end of year three, 2,000 at the end of year four, and would be written off at the end of year five. This bears no relation to the actual intrinsic value of the equipment, however, and even less to its business value. The interactions between the various values of a piece of IT equipment can be seen in the following diagram:

Figure 1: the interactions between the book, intrinsic and business values of a piece of IT equipment over time
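The straight-line book depreciation described above can be expressed directly. The sketch below reproduces the article's 10,000-over-five-years example (the currency is omitted, as in the text).

```python
def book_value(purchase_cost: float, life_years: int, age_years: int) -> float:
    """Straight-line book value: the cost is written down by an equal amount each year."""
    annual_write_down = purchase_cost / life_years
    return max(purchase_cost - annual_write_down * age_years, 0.0)

# The article's example: a 10,000 purchase depreciated over five years.
for year in range(6):
    print(f"end of year {year}: book value {book_value(10_000, 5, year):,.0f}")
# -> 10,000 / 8,000 / 6,000 / 4,000 / 2,000 / 0
```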

A piece of brand-new equipment may have lost up to a third of its intrinsic value by the time it arrives on site. The intrinsic value is what someone would pay for it on the open market: brand-new equipment has a higher value than second-hand, even where the equipment has never been out of its box. However, over a period of time, the rate of depreciation of an item of equipment's intrinsic value levels off; even a 20-year-old piece of equipment will tend to have some intrinsic value somewhere in it. In fact, wait long enough and it will become a collector's item and the intrinsic value will rise again, but this is unlikely to happen with standard commercial equipment found in most data centres.

The business value is more complex. Here, the value is based on how well the equipment is doing what the business needs: is it supporting application workloads that facilitate processes adequately; is it managing to deal with data requests and analysis in a rapid enough manner? The brand-new equipment has no business value until it is provisioned and is supporting business workloads. It will build its business value up through supporting a mix of workloads and data management. However, at some stage, the capabilities of the equipment will begin to fall behind the curve: newer equipment will be so much better at supporting the workloads that this piece of equipment is no longer putting the organisation at the forefront of technology; it has now become a constraint on how well the business can operate. However, the data being created by the applications that are supported by the equipment continues to grow; it's just that the equipment is preventing the business from gaining the true value from the data.

And this is where sweating assets can become a problem. Let's take that 10,000 item as it was before, but sweat it for an extra two years, giving a lifetime of use of seven years. The book depreciation doesn't change, so you now have a physical piece of equipment with zero book value for two years, which is good, as it doesn't appear on the books, but it is doing something. Its intrinsic value is pretty low, but is still something, so you have something that, should it be sold, would actually be a net positive to the bottom line: the equipment has some value; the books say it has none. The data value is continuing to increase, as the applications dependent on the equipment are still producing increasing volumes of it. However, the business value has gone beyond its peak and is beginning to fall away, and by year seven it may well be plummeting towards negative values. The business cannot gain the speed of access to, and visibility of, the value held in the data on the equipment; IT is no longer seen as a facilitator: it is a brake on the organisation's performance.

So, sweating the asset looks good initially (a book value of zero; an intrinsic value of something), but the business value is bad. For IT to be seen as the good guys, sweating assets has to be something to keep away from. The key is to identify the point at which the intrinsic value of the equipment is still high enough for the equipment to be sold off and the money obtained used to invest in more modern equipment. This takes us far beyond IT asset management (ITAM) and whether an asset should or should not be sweated; it gets us into how a platform should be continually optimised to meet the needs of the business. This is IT lifecycle management (ITLM) and is, in Quocirca's view, a strategic direction well worth investigating.

ITLM can lead to equipment being replaced on a two-to-three-year basis, rather than a five-to-seven-year one. The ongoing IT hardware costs can, and should, be amortised over the period through financing, and it is often better to go to an outside organisation to manage the ITLM process through from initial purchasing to deprovisioning and secure disposal of old equipment through sale or destruction. Maintenance costs of the total IT estate will be far lower: with equipment being more modern, there will be fewer component failures, and what does fail should be covered under vendor guarantee. Energy usage should be optimised: modern equipment has been shown over long periods to be more energy efficient year-on-year than what has gone before. Full-service ITLM should not be looked at as just an option; it should be looked at as the future of provisioning and managing IT platforms. The value for IT and, more importantly, for the business is strong.
Quocirca has three papers that cover ITLM and IT financing that are available for free download here:

http://www.quocirca.com/reports/786/using-ict-financing-for-strategic-gain
http://www.quocirca.com/reports/682/dont-sweat-assets--liberate-them
http://www.quocirca.com/reports/740/de-risking-it-lifecycle-management


Managing the software assets of an organisation.


In the last article, we looked at how the intelligent management of hardware in the data centre can avoid constraining the business through attempts to sweat assets. What was not covered was how the software assets should also be managed in order to maintain a highly flexible and up-to-date IT platform. Software asset management (SAM) tools have evolved into complex systems; using them correctly can ensure that an organisation maintains its software assets to better meet the needs of the organisation and keeps itself within the terms of all of its software contracts.

As with hardware, the first step is a full asset discovery. The majority of systems management tools will include something along these lines, but the capabilities of these systems may not provide exactly what is needed. For example, some systems will only look within the data centre. While this may be useful in identifying the number of licences being used across what software, it only touches a small proportion of the applications and licences an organisation has. Trawling across an organisation's total estate of servers, desktops, laptops, tablets and other mobile devices has become a difficult task, particularly with bring your own device (BYOD) devices, where licences may have been bought directly by the end user. However, building up a full picture of what is being used brings many advantages.

Firstly, patterns of usage can be built up. For example, is a specific group of employees using a specific application? Are employees carrying out the same tasks using different applications?

Secondly, the issue of licensing can be addressed. Once a full picture of the application landscape has been established, how these applications have been licensed can be looked at. In many cases, organisations will find that they have a corporate agreement for licences, yet departments and individuals have gone out and sourced their own software, with licences costing much more than if they had gone through a central purchasing capability. Bringing these licences into the central system could save a lot of money. However, it may well be that golden images have been used with corporate licences without any checks in place as to whether the number of seats implemented is within the agreed contract. Although this can lead to extra costs for the organisation in bringing the contract into line with the number of licences being used, it will be cheaper than being fined should a software audit be carried out by the likes of FAST. The usage patterns identified may help here as well. Many SAM tools will be able to report on when an application was last used by an employee. In some cases, this may have been weeks or even months ago; in many cases, it will be apparent that the employee installed the software to try out and has never used it since. Harvesting these unused licences can help to offset the need to change existing contracts.

Thirdly, as long as the asset discovery tool is granular enough, it will be able to ascertain the status of the application: its version level, what patches have been applied and so on. This allows IT to bring applications fully up to the latest version and to ensure that patches have been applied where necessary. When combined with a good hardware asset management system, the overall hardware estate can be interrogated to ensure that it is capable of taking the software upgrades.
Where this is not possible, the machine can be upgraded or replaced as necessary, or marked as a special case, with the software remaining as-is when further software updates are scheduled to run. Good SAM tools should also be able to map the dependencies between software, tracking how a business process will use different applications as it progresses. Again, through the use of suitable rules engines, these dependencies can be managed, such that the updating of one application does not cause a whole business process to fail. Possible problem areas can also be identified, for example where software has a dependency on the use of IE6 and so introduces security loopholes that could be exploited by hackers.

For most organisations, the main strength of SAM tools will, however, reside in the capability to manage software licences against agreed contracts. In many cases, this is not just a case of counting licences and comparing them against how many are allowed to be used. The domain expertise built up by vendors such as Flexera and Snow Software means that the nuances of contracts can be used to best advantage. For example, through identifying all licences that are currently in place and the usage patterns around them, it may be possible to use concurrent licensing rather than per-seat licensing. Here, licences are allocated based on the number of people using an application at the same time, rather than on named seats, so bringing down the number of licences required considerably. If this is not an option, then it may be that by bringing all licences under one agreement, rather than several, a point is reached where a new discount level is hit. For an organisation with, say, 1,000 seats (600 licences under one agreement and four other contracts for 100 seats each), bringing all 1,000 seats under one contract may optimise the costs considerably, not just on licences but also on maintenance. For international organisations, this can be especially valuable: bringing licence agreements from several countries together under a single international contract could help save large amounts of money, as well as the time taken in managing the various contracts.

When looking at SAM tools, there is one area where Quocirca recommends caution. There is a strong move away from perpetual or yearly licences plus maintenance towards subscriptions, as cloud computing pushes all software vendors to review how they market their products and attempt to maintain revenue streams. In many cases, a perpetual licence will allow a user to continue using the software even if no maintenance is paid from there on. Increasingly, subscription models will include some automated governance of the software: if the subscription is not paid, then access to the software will be automatically blocked. SAM systems will need to look more to the future and advise when subscription renewals are coming up, and also provide end-user self-service capabilities to gain access to external subscription-based services through agreed corporate policies. Quocirca strongly recommends that an organisation ensures its SAM partner of choice is already prepared for this and is working continuously to maintain its domain expertise in a manner that allows the organisation to move to a subscription model as and when this makes sense. Indeed, Quocirca expects to see more SAM systems come through in the market that will be able to help an organisation identify when this sweet spot is reached, providing helpful information on what options an organisation should be considering to optimise its software asset base.

Overall, SAM covers far more than the data centre. However, by rationalising client licences, a better picture of server requirements in the data centre can be built up. Only through a full SAM approach can the data centre be fully optimised for the business.
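As an illustration of the concurrent-versus-named-seat point above, a SAM tool effectively has to work out the peak number of simultaneous users from usage records. A minimal sketch follows; the session data is made up for illustration.

```python
# Each tuple is one usage session for a given application: (user, start_hour, end_hour).
# Times are hours into the working day and are made up for illustration.
sessions = [
    ("alice", 9.0, 12.0),
    ("bob",   9.5, 11.0),
    ("carol", 10.0, 16.0),
    ("dave",  13.0, 17.0),
    ("erin",  14.0, 15.0),
]

def peak_concurrency(sessions) -> int:
    """Sweep through session starts and ends, tracking the highest simultaneous count."""
    events = []
    for _, start, end in sessions:
        events.append((start, +1))
        events.append((end, -1))
    events.sort(key=lambda e: (e[0], e[1]))   # ends sort before starts at the same instant
    current = peak = 0
    for _, delta in events:
        current += delta
        peak = max(peak, current)
    return peak

named_seats = len({user for user, _, _ in sessions})
print(f"named users: {named_seats}, peak concurrent users: {peak_concurrency(sessions)}")
```

Run over real usage data, a gap between named users and peak concurrency is the signal that concurrent licensing (where the contract allows it) could cut the number of licences required.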


How to check the health of your data centre


Your sysadmins are looking at their screens, and all seems well. A sea of green denotes that every monitored system is performing optimally, and the sysadmins' thoughts are turning to the weekend and a 48-hour splurge of online gaming or code debugging. Meanwhile, on the help desk, the screams of anguished users can be heard, bemoaning the fact that their productivity has plummeted due to poor access to, or poor performance of, the IT platform. Somehow, there has been a serious disconnect between the sysadmins' metrics and the users' experience. From IT's point of view, everything is running well; but is it looking at the wrong things or, perhaps, not looking at enough of the right things? For the end user, identifying blame is easy: it's IT's fault. But that is usually a gross, and often unjust, simplification.

Just how healthy is your data centre? Is it doing what the business requires of it, or is it part of the problem? The key question is: what do you need to measure? To check the data centre's health, there is a need for multiple levels of measurement, right from highly granular, equipment-based monitoring and reporting through to outside-in monitoring and reporting from the end user's viewpoint.

At the data centre IT equipment level, monitoring state and performance is no longer enough. Being reactive to problems is storing up issues: an N+1 redundancy approach (having one more item of equipment than is truly needed) turns into an N approach if an item fails, and if a second one then fails while the first is being fixed, disaster happens. Far better to use a predictive approach: monitor things like the temperature of key components such as CPUs and disk drives; monitor power draw to see if it suddenly and unexpectedly alters or is trending upwards, and replace the system before it fails. Understand that during replacement, an N+1 approach is no longer providing any redundancy, so either go for an N+2 (or greater) approach, or ensure that key components are easily accessible so that replacement can be carried out rapidly, minimising the time during which redundancy is not in place.

Next is the environmental health of the data centre. The use of monitors for overall temperature, smoke and humidity, along with infra-red heat sensors, will allow problems to be detected before they become issues. By linking these to the equipment monitoring systems, the presence of a hot spot indicated by an infra-red sensor can be linked to a specific piece of equipment, which can be swapped out or shut down to prevent the problem getting out of hand.

The broader facility and its equipment also need covering. Whereas facilities management may be using a building information modelling (BIM) tool, this will generally not be integrated into IT's systems management tools. The use of a data centre infrastructure management (DCIM) suite may pull everything together but, however it is implemented, the health of the facility's power distribution, uninterruptable power supplies (UPSs), auxiliary generators and cooling systems has to be linked in to the overall view of how the data centre is performing. Through using modular systems throughout a data centre, from the IT equipment to the facility support equipment, failures of individual pieces of equipment can be allowed for.
Where possible, use load balancing capabilities (for example, intelligent virtualisation of servers, storage and networking equipment, and intelligent workload management modes in UPSs and generators) to provide the maximum levels of business continuity. Load balancing will provide much higher levels of availability than a direct N+1 approach, as the failure of two or more items can still be dealt with, even if application performance is hit.
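The predictive approach described above largely comes down to spotting drift in readings that should be flat. As a minimal sketch (the readings, window and threshold are illustrative assumptions), a slope fitted to recent power-draw samples can flag a component worth investigating before it fails:

```python
def slope_per_sample(readings: list[float]) -> float:
    """Least-squares slope of readings against their sample index."""
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    covariance = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings))
    variance = sum((x - mean_x) ** 2 for x in xs)
    return covariance / variance

# Illustrative daily power-draw readings (watts) for one server over two weeks.
power_draw = [412, 410, 415, 411, 413, 418, 422, 425, 431, 436, 440, 447, 455, 462]

DRIFT_LIMIT_W_PER_DAY = 2.0   # illustrative threshold for "unexpectedly trending upwards"

drift = slope_per_sample(power_draw)
if drift > DRIFT_LIMIT_W_PER_DAY:
    print(f"power draw rising at {drift:.1f} W/day - schedule investigation or replacement")
else:
    print(f"power draw stable ({drift:.1f} W/day)")
```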



But assessing IT health is not just an IT question; it is a business question. The above discussions deal with the data centre itself, and for many organisations that may already have the above in place, it may be seen as being enough. The problem, from Quocirca's view, is that the screens the sysadmins are looking at are generally part of that data centre environment, and are connected to the systems through data centre networks at data centre speeds. Hardly surprising, then, that everything looks as if it is working well while the help desk goes into meltdown.

The data centre is connected to the rest of the organisation through local and wide area networks (LANs and WANs). The users access the data centre through these networks. If there are problems anywhere along these connections, the user will have a poor experience and will phone the help desk, often with the perception that it is an application or data centre problem, rather than a network one. It is incumbent on the data centre manager, therefore, to be able to monitor connectivity across the different types of network wherever this is possible. However, many of today's workers will be mobile or working from home and will therefore be accessing data centre services through public connectivity: ADSL lines, WiFi or mobile wireless. Measuring network performance across these less predictable connections can be problematic, but providing tools to the help desk so that they can ping the user's device and see whether latency, jitter or packet loss could be causing issues will help in effectively identifying the root cause of any problem.

The final area is the end user's device itself. A PC may have a disk drive that is full; a tablet may have a process that has hung at 100% CPU utilisation; a virus may be impacting overall performance. Putting in place tools that enable the end device to be monitored and fixed automatically wherever possible, or through efficient and effective means via the help desk where automation cannot be used, will again make root cause analysis easier. The use of human mean opinion scoring (MOS) systems can also help. Rather than depending on a technical measurement of the performance of systems (comparing a pseudo-transaction's performance against an old service level agreement (SLA) and getting a green signal), asking real users about their experience will be far more illuminating. If users find performance to be too slow, it is no use pointing to the SLA and saying that it is within agreed limits; if the perception is that it is too slow, then it is down to IT to see if performance can be improved.

Monitoring the health of the data centre is like monitoring the health of a person: focusing on just one area can lead to missing where the real issue is, and so failing to treat the problem. Determining the set of measurements that will provide an accurate assessment of data centre health in terms of business performance requires an holistic approach. Any data centre manager looking to fully support the business should take an end-to-end systems management approach, so ensuring that the organisation is working against a healthy IT platform, not just a healthy data centre.
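The help desk connectivity check described above boils down to a few statistics over round-trip-time probes. A minimal sketch follows, assuming the RTT samples have already been gathered (None marks a lost probe); the thresholds at the end are illustrative only.

```python
from statistics import mean
from typing import Optional

def connection_report(rtts_ms: list[Optional[float]]) -> dict:
    """Summarise latency, jitter and packet loss from round-trip-time samples."""
    received = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(received)) / len(rtts_ms)
    # Jitter as the mean absolute difference between consecutive received samples.
    jitter = mean(abs(b - a) for a, b in zip(received, received[1:])) if len(received) > 1 else 0.0
    return {
        "latency_ms": round(mean(received), 1) if received else None,
        "jitter_ms": round(jitter, 1),
        "loss_pct": round(loss_pct, 1),
    }

# Illustrative probe results from a home worker's ADSL/WiFi connection (None = lost packet).
samples = [48.0, 52.0, 47.0, None, 95.0, 51.0, 49.0, None, 50.0, 120.0]
report = connection_report(samples)
print(report)
print("suspect the network" if report["loss_pct"] > 2 or report["jitter_ms"] > 30 else "network looks clean")
```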


The BYOD-proof datacentre.


Bring your own device (BYOD) is changing the way that end users work, and the knock-on impact on how the organisation has to operate should not be underestimated. BYOD is not just a case of employees, consultants and contractors having a range of devices with different capabilities, operating systems and applications in place; it is far more complex than that. The main problem is that BYOD leads individuals to the perception that they can now choose how they want to work, with tools of their choice. While this has been possible in the past, the growth of shadow IT, where departments often decided to go their own way on certain projects, was relatively easily identified by central IT using basic systems management tools. These departmental IT systems could then be brought under the wing of IT and managed accordingly. Central IT finds it rather more difficult to manage systems that are completely outside the organisation.

Dropbox, SkyDrive, Box, SugarSync and other file sharing services may have extended their capabilities to include team working, but there are still many employees who will have individual accounts. These accounts, or even group ones funded and run by departments still attempting to carry out shadow IT activities, will contain information that is corporate. However, it is outside of the organisation's reach: it cannot be accessed to report against or to bring into the mix to aid decision making. It may be corporate data, but it is not useful to the organisation; in fact, it is counting against the organisation being effective. Decisions are being made against the information actually available, whereas they should be made against the total information required, and these two measures are increasingly drifting apart.

What can be done? IT has to look at a different approach. It has to look at the data centre and figure out how this can be used to centralise corporate information, which will, in many cases, involve centralising how an end user accesses and uses their corporate IT platform. The first thing is to carry out a full audit of what your current IT estate consists of. This is not just a hardware exercise; software has to be included as well. On the hardware side, vendors such as ManageEngine, Altiris and LANDesk, as well as IBM, Dell and CA, can provide help. On the software side, Centrix Software, Snow, Rimo3 and Flexera are options to help identify not only what is out there but, in some cases, how it is being used, helping to identify where licence savings can be made.

Next is to create a centralised workplace for users that will still give an adequate user experience. A virtual desktop infrastructure (VDI) may be a good starting point, but centralising everything may not lead to an overall optimised environment with good enough performance for users. Streaming of software can be carried out: Citrix and VMware have capabilities in this space, and Numecent's cloud paging approach is showing great promise in using the power of the end device alongside intelligently streamed applications to give rapid start-up with native performance. Such an approach of providing a flexible, hybrid server-side computing model lays the foundation for a more controlled corporate information strategy. Next comes the need to rationalise what users access via their devices.
The use of sandboxing on the device (available via Centrix Software and RES Software, amongst others) enables a corporate environment to be created where users cannot cut and paste information to or from their own consumer environment, and where viruses and other malware cannot cross over.

Identification of which apps are being used also needs to be done. This can be more difficult, and many organisations have taken the approach of proscribing what cannot be used, such as Dropbox. However, Quocirca continually finds that such proscription does not work unless there are adequate tools in place to stop it from happening. As a BYOD device on the public internet using cloud-based apps does not necessarily have to touch the corporate network, blocking usage outright is almost impossible. However, a touchpoint can be created through the use of server-side computing accessed via a virtual private network (VPN). Any corporate activity can then be captured as it crosses these touchpoints, and data created during that activity can be redirected to central stores through the use of tools such as CommVault Simpana.

To bring users into a complete corporate environment, and away from the use of consumer tools on their BYOD device, requires a little more work. Once app usage has been identified, which may involve sitting down with users and asking them which apps they use and why, IT can create a list of functions that are best served via native apps on the devices. It is then incumbent on IT to identify equivalent apps that will provide the experience the individual wants with the information control the organisation requires. This means coming up with a list of preferred apps. For example, rather than users using Dropbox, provide them with Citrix ShareFile; rather than using SkyDrive, move them through to Office 365 and an enterprise subscription to SkyDrive Pro. These options need to be made available to users through an enterprise app store accessible from within the enterprise sandbox on their device, which will require a system within the data centre that can run such a portal, with links through to the apps in the general app store for the device.

Users will also need to be made aware of why they are being asked to use these apps. The main reasons should build on the fact that they are not just individuals: they are members of one or more teams that make up the overall organisation. The information they create and work on is of value to the organisation only if the organisation can access it. Providing them with the preferred tools enables the organisation to access and make decisions against all data across the organisation, which will result in a more effective and successful company.

For the organisation, a data centre that is built for BYOD should also build in much better data security, as information assets can be secured in one place. As employees, contractors and consultants come and go, they can more easily be enabled to use, or blocked from using, the data assets. Sandboxing means that, although the device belongs to the user, the part of the device that has been assigned to the organisation can be deleted, ensuring that information remains secure.

The role of the data centre will continue to change. Abdicating the role of centre for managing an organisation's information assets may well lead to the data centre being seen as less of a necessity, and to everything being moved out to the cloud.


- 18 -

Musings on data centres Volume 2

Is your datacentre fit for high performance computing and high availability?
The old view of high performance computing (HPC) was of highly specialised computers. These ranged from "specialist generalist" computers (systems produced in significant numbers for focused workloads), such as the RISC-based IBM RS/6000 or HP PA-RISC systems used in engineering and design, through to bespoke supercomputers based on highly specialised silicon for massive number crunching. With high availability (HA), complex clusters of servers, with dedicated software monitoring the heartbeat of the systems and managing failover to redundant systems, were the norm. Many organisations were proud to have general systems running at above 99% availability: the aim was seen as "five nines five", or 99.9995% availability, around two and a half minutes per year of unplanned downtime. Pulling the two together was very expensive, and few organisations could afford highly available, high performance systems.

HPC and HA should now be part of the inherent design of systems. New technical approaches such as big data and cloud computing require technology platforms that can not only meet the resource levels required by the workloads, but can also adapt resources in real time to meet changes in the workloads' needs, and do all of this on a continuous basis with little to no planned or unplanned downtime. To create an HA HPC data centre is now more of an achievable aim than ever.

Virtualisation and cloud computing provide the basic means to gain good levels of availability. By intelligently virtualising the three main resource pools of compute, storage and network, the failure of any single item should have almost no impact on the running of the rest of the platform. "Almost no impact" is the key here: if the virtualised pool consists of, say, 100 items, a single failure is only 1% of the total and the performance impact should be minimal. However, if the virtualised pool is only 10 items, then the hit will be 10%. Note also that although the use of virtualisation provides a better level of inherent availability, it is not a universal panacea. Virtual images of applications, virtual storage pools and virtual network paths are still dependent on the physical resources assigned to them, and the data centre design must take this into account. If the server running a virtual image fails, it will still be necessary to spin up a new image elsewhere on the physical server estate and reassign connections. With the right software in place, such as VMware's vSphere HA, Veeam Backup & Replication or Vision Solutions' Double-Take, recovery from such failure can be automated and the impact to the organisation minimised, with end users often not being aware of any break to their service.

At the storage level, mirroring of live data will be required. For true HA, this will need to be a live-live, real-time synchronised approach, but near-real time can be achieved through the use of snapshots, where on the failure of a storage array a new array is rapidly spun up and made available based on copies of the live data made previously at regular intervals. The majority of storage vendors offer different forms of storage HA, with EMC, NetApp, Dell, IBM, HDS and HP all having enhanced versions of their general storage offerings with extra HA capabilities.

With networking, the move towards fabric networks is helping to provide greater HA.
Hierarchical network topologies had very basic HA capabilities, based on the fact that they were best-effort constructs rather than defined point-to-point connections. However, this also meant that they were slow, and on failure could take a long time to reconfigure to a point where any meaningful level of performance was regained.




Fabric networks collapse the network down to fewer levels, and provide a more dynamic means of reconfiguring the network should any item fail.

In all the above cases, the key is still to have more resources than are actually required, in an n+1 (one more item of equipment than is anticipated as being needed) or n+m (multiple more items than needed) architecture. For true single-facility HA, this also has to extend to the design of the data centre itself: power management and distribution, cooling and power backup all need to be reviewed. Wherever possible, a modular approach should replace the monolithic. For example, UPS systems should not be bought as single units, where any failure could bring the whole system down. Modular architectures from the likes of Eaton or Emerson allow for component failure while maintaining capability through load balancing, and offer the capacity to replace modules while the system is still live.

However, the ultimate in HA can only be obtained through the complete mirroring of the whole architecture across facilities. This requires heavy investment, along with full monitoring of transaction and replication streams, so that when the failure of any component forces a changeover to the mirrored facility, full knowledge is retained of any transactions that cannot be completely recovered and systems do not become corrupted through partial records being logged.

For HPC, many see the use of a scale-out architecture as the solution: to gain greater performance, more resource is simply thrown into the pool. Although this will work in certain circumstances, it is not the answer for all workloads. For example, the mainframe is still a better platform for many online transaction processing (OLTP) workloads, and IBM's Power platform can deal with certain types of number crunching more effectively than Intel or AMD CPUs can.

For an HA HPC platform, design will still be key. Start from what the organisation really needs in the way of HA and HPC and design accordingly, making sure that the costs of each approach are made apparent to the business so that it can make the final decision. You may be surprised that what started out as a solid need for a 100% available platform suddenly becomes a 99.999% or lower need, and that very small decrease in availability can mean a difference of millions of pounds in approach if it transpires that a live-live mirror across multiple data centres is not really required.
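The arithmetic behind these availability targets and pool sizes is simple enough to check directly. The short sketch below, using the same illustrative figures as this piece, converts an availability percentage into permitted unplanned downtime per year and shows the capacity lost when a single item in a virtualised pool fails. The figures are examples only, not recommendations.

MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600 minutes

def downtime_minutes_per_year(availability_pct):
    """Unplanned downtime per year permitted by an availability target."""
    return MINUTES_PER_YEAR * (1.0 - availability_pct / 100.0)

def single_failure_impact_pct(pool_size):
    """Share of pooled capacity lost when one item in the pool fails."""
    return 100.0 / pool_size

if __name__ == "__main__":
    for target in (99.0, 99.9, 99.999, 99.9995):
        print("%.4f%% availability allows %.1f minutes of downtime per year"
              % (target, downtime_minutes_per_year(target)))
    for pool in (10, 100):
        print("a pool of %d items loses %.0f%% of capacity on a single failure"
              % (pool, single_failure_impact_pct(pool)))

Five nines (99.999%) allows a little over five minutes a year and "five nines five" roughly two and a half; the gap between those figures and 100% is where the multi-million pound mirroring decisions sit.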

Data centre highlights of 2013


It's been an interesting year for the data centre. The increasing interest in co-location, hosting and cloud has led to murmurings around the death of the in-house data centre, while hardware and software vendors have tried to adapt to a world that is no longer as predictable as it was.

Certainly, the co-location vendors, such as Interxion, Equinix and Telecity, have been having a good time, yet even they realise that they cannot rest on their laurels. Differentiating themselves from the crowd is key, and the co-location companies are looking at how to become cloud brokers, helping their customers identify who else within a given facility may be able to provide functions and services at internal data centre interconnect speeds, rather than at wide area network (WAN) interconnect speeds. Quocirca believes that such services will help to define a new market during 2014, and the winners will become centres of gravity for new software players who see such a model as fitting alongside their standard on-premise and/or cloud offerings. This will then lead the likes of Rackspace, Savvis and other hosting companies, along with the more proprietary cloud players such as AWS and Google, to up their efforts in providing marketplaces through which customers can gain access to self-service functionality and applications in the same manner, or at least in a way that minimises latency and optimises response across their multiple data centres.




Certainly, throughout 2014 and beyond, the market for these new service brokers will increase. However, the data centre owned and managed by the organisation is nowhere near dead. The way that a data centre needs to be built, provisioned and managed is certainly changing, though.

In the past, data centre planning was mainly about when the facility would need to be expanded. However, many data centres are now shrinking as more workloads move into external facilities and as virtualisation and increasing equipment densities push down capacity requirements. This introduces a lot of issues, particularly around the equipment that is crucial to a data centre's operation but tends to be overlooked by IT. Facilities management may have control of equipment such as UPSs, cooling systems and auxiliary generators, and these have tended to be implemented as monolithic systems. That may be fine when the only future is expansion, as growth can come through the incremental addition of smaller units. When shrinkage is required, however, it means either running with excess capacity on these systems, and so ruining any power usage effectiveness (PUE) scores (a simple worked example of this effect appears below), or a complete and expensive replacement of the systems to better reflect the reduced requirements. Therefore, we have seen a move to replacing these systems with more modular ones as they come up for replacement. Indeed, many organisations are identifying the sweet spot where the benefit of sweating an item of equipment is outweighed by the cost of running it in an ineffective manner, and are replacing these systems with more efficient ones.

This is also carrying through to the IT equipment, where IT teams are realising that trying to plan an IT platform over a period of many years may no longer be an effective way of meeting the needs of the organisation. Quocirca is seeing more organisations take a lifecycle management approach to their hardware, replacing items on a basis that attempts to maintain the most cost-effective, beneficial platform for the business, rather than just the cheapest. Here, organisations such as Bell Microsystems provide full services from acquisition through implementation and management to swap-out and secure disposal. The disposal of old equipment has also matured as legal requirements such as the Waste Electrical and Electronic Equipment (WEEE) regulations tighten. As well as Bell, EcoSystems IT provides a complete secure disposal service in which it recycles as much as it possibly can, generally at zero net cost to the organisation requiring the service.

New data centre equipment has also tended to move away from self-provisioned racks towards a more converged infrastructure of pre-engineered compute, storage and network systems, such as Cisco's UCS, IBM's PureData and Dell's VRTX and Active Systems platforms. This may be seen as a heavy expense for organisations struggling to raise finance in today's economic climate. For those struggling to raise the funds required to move their existing data centre towards a modern platform architecture, options from the likes of BNP Paribas Leasing Solutions can enable IT funds to be aggregated across many years and made available to cover the hardware and software acquisition costs of larger projects, with repayments made across a broader period of time and without the risks often associated with tying IT expenditure into bank or other finance loans, where the business may need to be put at risk against repayments. BNP Paribas only takes a lien against the equipment, so minimising the business risk.

One of the biggest changes in the data centre, however, has been driven by the increased use of virtualisation. Virtualisation has now become mainstream, with many organisations having at least a good proportion of their servers fully virtualised. Making effective use of a virtualised platform has led to a need for more complex and effective software to manage the whole environment, and for an abstraction of the management from the underlying hardware wherever possible.
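To return to the PUE point made earlier in this piece, the short calculation below shows why a monolithic UPS and cooling plant hurts the metric as a data centre shrinks. The figures are invented purely for illustration, and real overheads do fall somewhat with load, but the direction of travel is the same: a broadly fixed facility overhead divided by a shrinking IT load pushes PUE up.

def pue(it_load_kw, facility_overhead_kw):
    """PUE = total facility power / IT equipment power."""
    return (it_load_kw + facility_overhead_kw) / it_load_kw

if __name__ == "__main__":
    overhead_kw = 400.0   # assumed near-fixed cooling/UPS overhead of a monolithic plant
    for it_kw in (800.0, 500.0, 250.0):   # IT load shrinking as workloads move out
        print("IT load %4.0f kW -> PUE %.2f" % (it_kw, pue(it_kw, overhead_kw)))

A modular plant that can be stepped down alongside the IT load keeps the overhead figure, and therefore the PUE, much closer to where it started.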




To start with, 2013 saw the emergence of management software that concentrated on the virtual layer but failed to recognise the dependencies between the virtual and physical worlds. Now, Quocirca is seeing the emergence of management systems from the likes of CA, IBM and Dell that provide a more holistic view of what is happening, ensuring that issues at the physical layer do not have an overly adverse impact on the virtual systems.

This has then led to the emergence of the software defined world. Starting with software defined networking, it was realised that in a fully virtualised world, where functions and processes need to cross different platforms and systems, a dependence on proprietary systems would become an increasing hindrance to how well an IT platform could serve the business. Now, alongside software defined networking (SDN), we have seen software defined computing (SDC), software defined storage (SDS) and several other SDxs emerge. EMC has coined the term software defined data centre (SDDC) to try to show how all the SDxs will need to be brought together through an overarching management capability. For data centre managers, 2014 will therefore need to see the emergence of the software defined facility (SDF), bringing in the areas that facilities management tends to control. Here, look to the data centre infrastructure management (DCIM) vendors, such as nlyte, Emerson Network Power, CA and Raritan, to push inclusive systems that will embrace and work with existing SDx software from other vendors.

All told, 2013 has been a period of dynamic change for data centre managers. However, it has really only been laying the foundations for 2014, and Quocirca expects the continued move to a hybrid mix of existing physical platforms alongside private and public clouds, across multiple data centre facilities, to drive further innovation in the market.



REPORT NOTE: This report has been written independently by Quocirca Ltd to provide an overview of the issues facing organisations seeking to maximise the effectiveness of today's dynamic workforce. The report draws on Quocirca's extensive knowledge of the technology and business arenas, and provides advice on the approach that organisations should take to create a more effective and efficient environment for future growth.

About Quocirca
Quocirca is a primary research and analysis company specialising in the business impact of information technology and communications (ITC). With world-wide, native language reach, Quocirca provides in-depth insights into the views of buyers and influencers in large, mid-sized and small organisations. Its analyst team is made up of real-world practitioners with first-hand experience of ITC delivery who continuously research and track the industry and its real usage in the markets. Through researching perceptions, Quocirca uncovers the real hurdles to technology adoption: the personal and political aspects of an organisation's environment, and the pressures of the need for demonstrable business value in any implementation. This capability to uncover and report back on end-user perceptions in the market enables Quocirca to provide advice on the realities of technology adoption, not the promises.

Quocirca research is always pragmatic, business orientated and conducted in the context of the bigger picture. ITC has the ability to transform businesses and the processes that drive them, but often fails to do so. Quocirca's mission is to help organisations improve their success rate in process enablement through better levels of understanding and the adoption of the correct technologies at the correct time.

Quocirca has a pro-active primary research programme, regularly surveying users, purchasers and resellers of ITC products and services on emerging, evolving and maturing technologies. Over time, Quocirca has built a picture of long term investment trends, providing invaluable information for the whole of the ITC community.

Quocirca works with global and local providers of ITC products and services to help them deliver on the promise that ITC holds for business. Quocirca's clients include Oracle, IBM, CA, O2, T-Mobile, HP, Xerox, Ricoh and Symantec, along with other large and medium sized vendors, service providers and more specialist firms.

Details of Quocirca's work and the services it offers can be found at http://www.quocirca.com

Disclaimer: This report has been written independently by Quocirca Ltd. During the preparation of this report, Quocirca may have used a number of sources for the information and views provided. Although Quocirca has attempted wherever possible to validate the information received from each vendor, Quocirca cannot be held responsible for any errors in information received in this manner. Although Quocirca has taken what steps it can to ensure that the information provided in this report is true and reflects real market conditions, Quocirca cannot take any responsibility for the ultimate reliability of the details presented. Therefore, Quocirca expressly disclaims all warranties and claims as to the validity of the data presented here, including any and all consequential losses incurred by any organisation or individual taking any action based on such data and advice.

All brand and product names are recognised and acknowledged as trademarks or service marks of their respective holders.
