
Infrastructure 2.0 Journal





Hyper Converged Infrastructure | @CloudExpo #BigData #IoT #API #DevOps

Hyper convergence is not meant for complex, performance-sensitive environments

Hyper Converged Infrastructure, a Future Death Trap

In the late 1990s, storage and networking were separated from compute for a reason. Both require specialized processing, and it makes little sense for every general-purpose server to do that job; it is better handled by a group of dedicated devices. The most critical element in the entire data center infrastructure is the data, and it is safer to keep that data on special-purpose devices with the required level of redundancy than to spread it across the whole data center. Hyper convergence emerged for a noble cause: easing deployment in very small branch-office scenarios, since a traditional SAN is always complex to set up and operate. The real problems start when we attempt to replicate this layout in a large-scale environment with transactional workloads. Three predominant issues can hit hyper-converged deployments hard, and together they can spell a death trap. Sophisticated IT houses know these problems and stay away from hyper convergence, but others can fall prey to the hype cycle.

Performance Nightmares
Everybody jumped onto virtualization well before the complete virtualization stack was ready across compute, network and storage, and many struggled to isolate their problems among these three components. The more experienced realized that storage-level IO contention was the root cause of most of their performance issues, and began looking for a new class of storage products that guarantee performance at the volume and VM level. Now imagine the magnitude of the complexity when all three components are put together in a hyper-converged box and each IO must touch multiple general-purpose servers to complete a single application-level transaction in a loaded environment. Many of these issues do not surface until the infrastructure is actually loaded.
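
The point that problems stay hidden until the system is loaded can be sketched with a simple queueing model. The numbers below are illustrative assumptions, not measurements: on a hyper-converged node the IO path shares CPU and network with application VMs, so the effective utilization it sees rises with overall load, and mean response time grows non-linearly (here modeled M/M/1-style).

```python
# Toy M/M/1-style sketch (illustrative assumptions, not measured data):
# mean IO response time balloons as the shared node approaches saturation.

def io_latency_ms(service_ms, utilization):
    """Mean response time of an M/M/1 queue: service / (1 - utilization)."""
    assert 0 <= utilization < 1, "model breaks down at 100% utilization"
    return service_ms / (1.0 - utilization)

SERVICE_MS = 1.0  # assumed raw IO service time on an unloaded node

for load in (0.3, 0.6, 0.9, 0.95):
    # At 30% load latency barely moves; at 95% it is ~20x the service time.
    print(f"utilization {load:.0%}: {io_latency_ms(SERVICE_MS, load):.1f} ms")
```

The same 1 ms IO that looks fine in a pilot turns into tens of milliseconds once compute, network and storage contend for the same boxes, which is exactly when budgets make it hardest to add hardware.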

To make things worse, data does not stop growing during an economic downturn, but IT budgets do, and a downturn has been due for some time. IT houses tend to load their existing hardware to the maximum during such periods. Just as the performance issues of this misfit architecture pop up, further cost cutting kicks in to reduce IT headcount. Isn't that a real death trap, one that CIOs of cloud providers and enterprises need to avoid?

Hardware Refresh
It is common for storage vendors to replace just the storage head as part of the refresh cycle while the data stays intact on separate shelves. That refresh becomes far more complex when the data is distributed across internal disks in servers throughout the data center. Moreover, refresh cycles for compute and storage differ: disks typically stay in service longer than servers. In the hyper-converged case, everything must be replaced at once, which demands a tremendous amount of IT staff hours; worse still if it comes in the middle of an economic crisis.
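
Back-of-the-envelope arithmetic makes the refresh mismatch concrete. The cycle lengths below are assumed placeholders, not vendor figures: when disks live inside the servers, every compute refresh forces a data evacuation and rebuild, while a disaggregated design moves data only when the shelves themselves age out.

```python
# Rough sketch (assumed cycle lengths, not vendor figures): count forced
# data migrations over a planning horizon for each architecture.

SERVER_CYCLE_YEARS = 4   # assumed compute refresh cycle
DISK_CYCLE_YEARS = 7     # assumed disk-shelf refresh cycle
HORIZON_YEARS = 12

def refresh_events(cycle_years, horizon=HORIZON_YEARS):
    """Number of full refreshes completed within the horizon."""
    return horizon // cycle_years

# Disaggregated SAN: head swaps leave data in place; only shelf
# refreshes move data.
print(refresh_events(DISK_CYCLE_YEARS))    # 1 migration in 12 years

# Hyper-converged: disks ride along with every server refresh.
print(refresh_events(SERVER_CYCLE_YEARS))  # 3 migrations in 12 years
```

Under these assumptions the hyper-converged shop migrates its data three times in the period where the SAN shop does it once, and each migration consumes the very staff hours a downturn cuts first.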

Storage Expansion
When storage needs to expand, the customer in a hyper-converged environment ends up buying expensive full servers just to add capacity. Some web-scale companies are already facing this problem and are moving storage back out of the server.
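
A small cost sketch shows why. All prices and capacities below are made-up placeholders for illustration: growing capacity in a hyper-converged cluster means buying whole nodes, with CPU, RAM and software licenses the workload may not need, whereas a disaggregated design can add bare disk shelves.

```python
# Illustrative expansion-cost sketch (all figures are assumptions):
# add 100 TB by buying full nodes versus adding disk shelves.
import math

NODE_CAPACITY_TB = 20
NODE_COST = 25_000        # assumed: full server incl. CPU/RAM/licenses
SHELF_CAPACITY_TB = 50
SHELF_COST = 15_000       # assumed: JBOD shelf of equivalent media

def expansion_cost(extra_tb, unit_capacity_tb, unit_cost):
    """Cost of the smallest whole number of units covering extra_tb."""
    return math.ceil(extra_tb / unit_capacity_tb) * unit_cost

print(expansion_cost(100, NODE_CAPACITY_TB, NODE_COST))    # 125000
print(expansion_cost(100, SHELF_CAPACITY_TB, SHELF_COST))  # 30000
```

With these placeholder numbers, the hyper-converged expansion costs roughly four times as much for the same added terabytes, because compute is bought whether or not it is needed.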

At the outset, hyper convergence looks like an attractive option that seemingly provides a lot of flexibility. In reality, it comes with many limitations and curtails the flexibility to grow resources independently of one another. In addition, a performance nightmare is bound to hit once the system gets loaded.

More Stories By Felix Xavier

Recognized as one of the top 250 MSP thought and entrepreneurial leaders globally by MSPmentor, Felix Xavier has more than 15 years of development and technology management experience. With the right blend of expertise in both networking and storage technologies, he co-founded CloudByte.

Felix has built many high-energy technology teams, re-architected products and developed core features from scratch. Most recently, Felix helped NetApp gain leadership position in storage array-based data protection by driving innovations around its product suite. He has filed numerous patents with the US patent office around core storage technologies.

Prior to this, Felix worked at Juniper, Novell and IBM, where he handled networking technologies, including LAN, WAN and security protocols and Intrusion Prevention Systems (IPS). Felix has master's degrees in technology and business administration.