This report brings together a series of articles first published through ComputerWeekly
January 2014
2013 continued the era of big changes for data centres. Co-location services continued to grow; cloud computing became more of a reality; "software defined" was applied to anything and everything. Keeping up to speed with the changes is difficult at the best of times, which is why Quocirca has pulled together the articles it wrote for ComputerWeekly throughout 2013 as a single report.
Clive Longbottom, Quocirca Ltd
Tel: +44 118 9483360
Email: Clive.Longbottom@Quocirca.com
While IT equipment is covered by many vendors and systems management specialists, the implementation, monitoring and management of the environmental situation within a data centre is often less of a priority. This article looks at the various areas and what to watch out for.
The cloud seems like a good idea. Just like hosting did. Or the application service provider (ASP) market. However, a Plan B has to be in place to cover what your organisation has to do if your provider goes bust.

If you do go to the cloud, you will want as much flexibility as you can possibly get. For this, you must understand what can, and what cannot, be done when it comes to moving data and applications around in the cloud.

Software defined environments are all the rage. Networks, servers and storage each have their own SDx moniker, and now the software defined data centre (SDDC) has been mooted. Is the world ready for this?

When times are hard, it is very tempting to try to get more life out of your IT equipment. However, this may not be a cost-effective way of managing your ITC platform.

Even when you have a good level of control over the management and refresh of your hardware assets, there still remains the software. Many organisations have allowed their software to get out of control; here are some tips about regaining control and gaining money in the process.

Just how healthy is your data centre? With the way that technology, and the way it is used, has morphed, an organisation's IT platform is now far more like a living body. How can you carry out a proper health check across the complete platform?

Bring your own device (BYOD) is taxing the minds of many IT directors and data centre managers at the moment. Just how can a data centre be architected so as to embrace BYOD to the benefit of the organisation and users alike?

Do you need high performance computing (HPC)? More to the point, would you know if you didn't need it? If you do need it, are you sure that your data centre is up to housing a continuously running HPC system?
As 2013 closed, it was time to look back across the year and discuss what Quocirca saw as being the major happenings and news.
Quocirca 2014
Figure 1
A piece of brand new equipment may have lost up to a third of its intrinsic value by the time it arrives on site. The intrinsic value is what someone would pay for it on the open market: brand new equipment has a higher value than second-hand, even where the equipment has never been out of its box. However, over a period of time, the rate of depreciation of an item of equipment's intrinsic value levels off; even a 20-year-old piece of equipment will tend to have some intrinsic value somewhere in it. In fact, wait long enough and it will become a collector's item and the intrinsic value will rise again, but this is unlikely to happen with standard commercial equipment found in most data centres.

The business value is more complex. Here, the value is based on how well the equipment is doing what the business needs: is it supporting application workloads that facilitate processes adequately; is it managing to deal with data requests and analysis in a rapid enough manner? The brand new equipment has no business value until it is installed and supporting live workloads.
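The depreciation pattern described here (a sharp initial drop followed by a levelling-off toward a residual floor) can be sketched as a simple decay model. The rates and residual fraction below are illustrative assumptions for the sketch, not figures from the report:

```python
# Sketch of the intrinsic-value curve: up to a third of list price is
# lost on delivery, then value decays toward a residual floor rather
# than to zero. Decay rate and floor are illustrative assumptions.

def intrinsic_value(list_price: float, age_years: float,
                    delivery_loss: float = 0.33,     # "up to a third" lost on arrival
                    annual_decay: float = 0.4,       # assumed yearly decay rate
                    residual_fraction: float = 0.05  # assumed residual floor
                    ) -> float:
    """Exponential decay toward a residual floor after an initial drop."""
    floor = list_price * residual_fraction
    start = list_price * (1.0 - delivery_loss)
    return floor + (start - floor) * (1.0 - annual_decay) ** age_years

print(f"On delivery : {intrinsic_value(10_000, 0):,.0f}")
print(f"After 3 yrs : {intrinsic_value(10_000, 3):,.0f}")
print(f"After 20 yrs: {intrinsic_value(10_000, 20):,.0f}")
```

Even at 20 years the value sits on the residual floor rather than at zero, matching the observation that old equipment tends to retain some intrinsic value.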
Is your datacentre fit for high performance computing and high availability?
The old view of high performance computing (HPC) was of highly specialised computers. These ranged from specialist generalist machines (i.e. systems produced in significant numbers for focused workloads), such as the RISC-based IBM RS/6000 or HP PA-RISC systems used in engineering and design, through to bespoke supercomputers built on highly specialised silicon for massive number crunching.

With high availability (HA), complex clusters of servers with dedicated software monitoring the heartbeat of the systems and managing failover to redundant systems were the norm. Many organisations were proud to have general systems running at above 99% availability; the aim was seen as the "five nines" of 99.999% availability, or around five minutes per year of unplanned downtime. Pulling the two together was very expensive, and few organisations could afford highly available, high-performance systems.

HPC and HA should now be part of the inherent design of systems. New technical approaches such as big data and cloud computing require technology platforms that can not only meet the resource levels required by the workloads, but can adapt resources in real time to meet changes in the workloads' needs, and do all of this on a continuous basis with little to no planned or unplanned downtime.

To create an HA HPC data centre is now more achievable than ever. Virtualisation and cloud computing provide the basic means to gain good levels of availability. By intelligently virtualising the three main resource pools of compute, storage and network, the failure of any single item should have almost no impact on the running of the rest of the platform. "Almost no impact" is the key phrase here: if the virtualised pool consists of, say, 100 items, a single failure is only 1% of the total and the performance impact should be minimal. However, if the virtualised pool is only 10 items, then the hit will be 10%.
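The availability figures and pool arithmetic above can be put into concrete terms. A minimal sketch, using the numbers quoted in the text, converts an availability ratio into annual unplanned downtime and shows how pool size changes the impact of a single node failure:

```python
# Sketch: availability ratios expressed as annual unplanned downtime,
# plus the fraction of a virtualised pool lost when a node fails.
# Purely illustrative arithmetic on the figures quoted in the text.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def annual_downtime_minutes(availability: float) -> float:
    """Minutes per year of unplanned downtime implied by an availability ratio."""
    return (1.0 - availability) * MINUTES_PER_YEAR

def failure_impact(pool_size: int, failed_nodes: int = 1) -> float:
    """Fraction of a virtualised pool's capacity lost when nodes fail."""
    return failed_nodes / pool_size

print(f"99%     -> {annual_downtime_minutes(0.99):7.0f} min/year")
print(f"99.999% -> {annual_downtime_minutes(0.99999):7.1f} min/year")
print(f"Pool of 100: {failure_impact(100):.0%} capacity lost per failure")
print(f"Pool of 10 : {failure_impact(10):.0%} capacity lost per failure")
```

The same arithmetic makes the design point obvious: the larger the virtualised pool, the smaller the proportional hit from any single failure.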
Note also that although the use of virtualisation provides a better level of inherent availability, it is not a universal panacea. Virtual images of applications, virtual storage pools and virtual network paths are still dependent on the physical resources assigned to them, and the data centre design must take this into account. If the server running a virtual image fails, it will still be necessary to spin up a new image elsewhere on the physical server estate and reassign connections. With the right software in place, such as VMware's vSphere HA, Veeam Backup & Replication or Vision Solutions' Double-Take, recovery from such a failure can be automated and the impact on the organisation minimised, with end users often unaware of any break to their service.

At the storage level, mirroring of live data will be required. For true HA, this will need to be a live-live, real-time synchronised approach, but near-real-time recovery can be achieved through the use of snapshots: on the failure of a storage array, a new array is rapidly spun up and made available based on copies of the live data previously made at regular intervals. The majority of storage vendors offer different forms of storage HA, with EMC, NetApp, Dell, IBM, HDS and HP all having enhanced versions of their general storage offerings with extra HA capabilities.

With networking, the move towards fabric networks is helping to provide greater HA. Hierarchical network topologies had very basic HA capabilities, based on the fact that they were best-effort constructs rather than defined point-to-point connections. However, this also meant that they were slow and, on failure, could take long periods to reconfigure to a point where performance was regained to any extent.
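The snapshot-based, near-real-time recovery approach described above can be sketched in a few lines. The store structure, interval and data here are illustrative assumptions for the sketch, not any vendor's actual product behaviour; the key property it demonstrates is that worst-case data loss (the recovery point) equals the snapshot interval:

```python
# Sketch of snapshot-based near-real-time storage recovery: on array
# failure, a replacement is brought up from the most recent snapshot,
# so any writes made since that snapshot are lost. Illustrative only.

from dataclasses import dataclass, field

@dataclass
class SnapshotStore:
    interval_minutes: int                       # how often snapshots are taken
    snapshots: list = field(default_factory=list)

    def take_snapshot(self, data: dict, at_minute: int) -> None:
        # Copy live data at a point in time; a real array would use
        # copy-on-write rather than a full copy.
        self.snapshots.append((at_minute, dict(data)))

    def recover(self) -> dict:
        """On array failure, restore from the most recent snapshot."""
        return dict(self.snapshots[-1][1]) if self.snapshots else {}

    @property
    def worst_case_data_loss_minutes(self) -> int:
        # Writes made since the last snapshot are lost on failure.
        return self.interval_minutes

store = SnapshotStore(interval_minutes=15)
live = {"orders": 100}
store.take_snapshot(live, at_minute=0)
live["orders"] = 140          # writes after the snapshot...
recovered = store.recover()   # ...are lost when the array fails
print(recovered, "worst-case loss:", store.worst_case_data_loss_minutes, "min")
```

This is why snapshots give near-real-time rather than true HA: shrinking the interval shrinks the exposure, but only live-live synchronous mirroring takes it to zero.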
REPORT NOTE: This report has been written independently by Quocirca Ltd to provide an overview of the issues facing organisations seeking to maximise the effectiveness of today's dynamic workforce. The report draws on Quocirca's extensive knowledge of the technology and business arenas, and provides advice on the approach that organisations should take to create a more effective and efficient environment for future growth.
About Quocirca
Quocirca is a primary research and analysis company specialising in the business impact of information technology and communications (ITC). With world-wide, native language reach, Quocirca provides in-depth insights into the views of buyers and influencers in large, mid-sized and small organisations. Its analyst team is made up of real-world practitioners with first-hand experience of ITC delivery who continuously research and track the industry and its real usage in the markets. Through researching perceptions, Quocirca uncovers the real hurdles to technology adoption: the personal and political aspects of an organisation's environment and the pressures of the need for demonstrable business value in any implementation. This capability to uncover and report back on the end-user perceptions in the market enables Quocirca to provide advice on the realities of technology adoption, not the promises.
Quocirca research is always pragmatic, business orientated and conducted in the context of the bigger picture. ITC has the ability to transform businesses and the processes that drive them, but often fails to do so. Quocirca's mission is to help organisations improve their success rate in process enablement through better levels of understanding and the adoption of the correct technologies at the correct time. Quocirca has a pro-active primary research programme, regularly surveying users, purchasers and resellers of ITC products and services on emerging, evolving and maturing technologies. Over time, Quocirca has built a picture of long term investment trends, providing invaluable information for the whole of the ITC community. Quocirca works with global and local providers of ITC products and services to help them deliver on the promise that ITC holds for business. Quocirca's clients include Oracle, IBM, CA, O2, T-Mobile, HP, Xerox, Ricoh and Symantec, along with other large and medium sized vendors, service providers and more specialist firms. Details of Quocirca's work and the services it offers can be found at http://www.quocirca.com

Disclaimer: This report has been written independently by Quocirca Ltd. During the preparation of this report, Quocirca may have used a number of sources for the information and views provided. Although Quocirca has attempted wherever possible to validate the information received from each vendor, Quocirca cannot be held responsible for any errors in information received in this manner. Although Quocirca has taken what steps it can to ensure that the information provided in this report is true and reflects real market conditions, Quocirca cannot take any responsibility for the ultimate reliability of the details presented.
Therefore, Quocirca expressly disclaims all warranties and claims as to the validity of the data presented here, including any and all consequential losses incurred by any organisation or individual taking any action based on such data and advice. All brand and product names are recognised and acknowledged as trademarks or service marks of their respective holders.