How to Prepare for the Coming Age of Dynamic Infrastructure


Agreed: cloud vendors need to differentiate on services. Disagreed: that cloud standards won't further that cause, and that the choice of virtualization platform makes a difference.

The battle for virtualization platform dominance rages on, but it will not be virtualization that makes or breaks a cloud computing offering; it will be the diversity – or lack thereof – of the services it offers. We need to stop treating virtualization as the be-all and end-all of cloud computing and start bending our efforts toward what really matters: the ability of providers to efficiently offer a broad set of differentiating services, and of customers to take advantage of those services to architect a cloud-based solution that delivers their applications efficiently, securely, quickly, and with as little manual intervention as possible. Citrix CTO Simon Crosby touches on this point in a recent interview with John Furrier, “VMWare Had Nothing To Do With The Cloud Trend. Their Strategy is Flawed”, on the topic of cloud computing and virtualization.

“I don’t see any viable opportunities for cloud vendors if all of them are offering a homogeneous set of services designed by one company called VMWare. That whole concept is broken.”

I’m going to agree with Simon, with the caveat that he could have ended his statement at “homogeneous set of services” and the concept would still be broken. Cloud computing isn’t about any one virtualization platform; it’s about the services the cloud can provide – from the network to storage to the application network to ease of provisioning and management. Those services must eventually encompass the whole of a traditional IT infrastructure, and they must be accessible and manageable by the customer. And they need to be portable across cloud implementations, lest customers continue to balk at the prospect of locking themselves into any one cloud computing provider or architecture. Crosby questioned the need for Cloud APIs and standardization a little later in his interview, saying, “Should Cloud APIs be standardized? If there was a standard then all the clouds would look the same.”

I disagree. The existence of standards would allow cloud providers – and more importantly cloud customers – to differentiate. Would all clouds look the same from the outside with standards? Yes. Would they act the same way on the inside? Absolutely not – at least one hopes not. The creation of standardized Cloud APIs is about portability, management, and accessibility on the outside, which is really about interoperability from a service-offering point of view. A standardized Cloud API has very little to do with the internal implementation; it’s simply an abstracted communications-layer interface. It’s SOA in the purest sense – the separation of the interface (the API) from the implementation (the underlying “cloud” infrastructure). Standards are certainly part of what’s needed, eventually, to allow potential customers to explore cloud computing offerings and to enable organizations to take advantage of concepts like cloud balancing. But Simon’s point about the danger of homogeneous services to the longevity of cloud providers is really about services on the inside, not the outside, and it is on the inside that standardization is absolutely necessary: it is what enables providers to offer a heterogeneous set of infrastructure services, and simply to restore the choice and control that is inherently lost when moving an application to a cloud provider today.
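To make that separation of interface from implementation concrete, here’s a minimal sketch in Python. Everything in it – CloudAPI, AcmeCloud, ZenithCloud – is hypothetical, not any real provider’s API; the point is simply that two clouds can look identical from the outside while behaving entirely differently on the inside:

```python
from abc import ABC, abstractmethod

class CloudAPI(ABC):
    """A standardized, provider-agnostic interface: callers see only this."""

    @abstractmethod
    def provision(self, image: str, cpus: int, memory_gb: int) -> str:
        """Provision a compute instance and return its identifier."""

    @abstractmethod
    def deprovision(self, instance_id: str) -> None:
        """Release a previously provisioned instance."""

class AcmeCloud(CloudAPI):
    """Hypothetical provider: same interface, its own internals."""
    def provision(self, image, cpus, memory_gb):
        # Internally, Acme might schedule onto a proprietary hypervisor.
        return f"acme-{image}-{cpus}x{memory_gb}"
    def deprovision(self, instance_id):
        print(f"Acme releasing {instance_id}")

class ZenithCloud(CloudAPI):
    """Another hypothetical provider with a very different implementation."""
    def provision(self, image, cpus, memory_gb):
        # Zenith might place workloads by bidding on spare capacity.
        return f"zenith::{image}::{cpus}c{memory_gb}g"
    def deprovision(self, instance_id):
        print(f"Zenith releasing {instance_id}")

def deploy(cloud: CloudAPI) -> str:
    # Portable customer code: works against any conforming provider.
    return cloud.provision(image="web-app", cpus=2, memory_gb=4)

if __name__ == "__main__":
    for provider in (AcmeCloud(), ZenithCloud()):
        instance = deploy(provider)
        provider.deprovision(instance)
```

The customer’s `deploy` code never changes; only the provider behind the interface does – which is exactly the portability a standardized Cloud API buys.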

The existence of APIs and standards on the inside, a la Infrastructure 2.0, would make it easier for providers to integrate a wider range of infrastructure offerings into their management and orchestration systems, giving customers a broader set of options for architecting a cloud-based infrastructure that best suits the needs of their applications. Without such standards, cloud providers face a limited set of choices, and all of them lead to the same result: a restricted set of services that may or may not allow the provider to differentiate and add value atop what is essentially cheap, managed compute resources. It’s what cloud providers can build using those standards to expose services that will give them a competitive advantage.
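As a rough illustration of what that could look like on the inside – all of these names (InfrastructureService, Orchestrator, the vendor classes) are made up for the sake of the sketch – imagine an orchestration layer that accepts any infrastructure service conforming to one contract, so adding a new vendor’s offering means registering one more implementation rather than a bespoke integration project:

```python
from typing import Protocol

class InfrastructureService(Protocol):
    """Hypothetical 'inside' contract every infrastructure service implements."""
    name: str
    def configure(self, app_id: str, settings: dict) -> None: ...

class VendorAFirewall:
    name = "firewall"
    def configure(self, app_id, settings):
        print(f"[VendorA] firewall rules for {app_id}: {settings}")

class VendorBAccelerator:
    name = "acceleration"
    def configure(self, app_id, settings):
        print(f"[VendorB] acceleration profile for {app_id}: {settings}")

class Orchestrator:
    """One integration point: any conforming service can be registered,
    broadening the menu of services the provider can expose to customers."""
    def __init__(self):
        self._catalog: dict[str, InfrastructureService] = {}
    def register(self, service: InfrastructureService) -> None:
        self._catalog[service.name] = service
    def apply(self, app_id: str, choices: dict) -> None:
        # 'choices' maps a service name to its customer-selected settings.
        for name, settings in choices.items():
            self._catalog[name].configure(app_id, settings)

orch = Orchestrator()
orch.register(VendorAFirewall())
orch.register(VendorBAccelerator())
orch.apply("app-42", {"firewall": {"allow": [80, 443]},
                      "acceleration": {"compress": True}})
```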


IT’S STILL ABOUT THE APPLICATIONS

Simon later says, “At the end of the day the IT job is to deliver applications, and those applications today are sophisticated things composed of multiple runtime entities or multiple virtual machines.” I couldn’t agree more, and Simon’s reminder is timely as we begin to see more and more interest in long-distance migration of “applications” across physical locations. The focus of any IT infrastructure and architecture is to deliver applications to customers, users, and partners, and to do it in a way that’s fast and secure. In many cases the application – whether by design or accident – simply can’t meet these goals on its own. Sometimes the application needs the assistance of IT infrastructure to be secure against attack and compromise, because some classes of attacks are directed not at the application itself but at its supporting infrastructure: the application server, the network stack, the operating system, the physical device.

We know that the choice of load balancing algorithm has a direct impact on both the efficiency and the performance of applications, yet many of today’s load balancing offerings are “one size fits all” and give customers no control over which algorithm is used or whether optimizations are applied. It is well understood that some application delivery services require visibility into both the client and server sides of the delivery equation, yet this visibility is denied to customers. The purpose of application delivery is to extend the reach of the application out into the network: to leverage services residing in the network to improve performance, increase capacity, and make securing the application more efficient. It’s about an architecture designed to make the most out of all resources, not just to remove the burden of acquiring and managing physical servers.
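For the curious, here’s a toy sketch of two of the classic algorithms a customer might want to choose between – the server addresses and connection counts are invented, and a one-size-fits-all offering effectively hard-codes this decision for you:

```python
import itertools
import random

def round_robin(servers):
    """Cycle through servers in order; simple, but ignores current load."""
    pool = itertools.cycle(servers)
    return lambda: next(pool)

def least_connections(servers, active):
    """Pick the server currently handling the fewest active connections."""
    return lambda: min(servers, key=lambda s: active[s])

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
active = {s: random.randint(0, 20) for s in servers}  # simulated load

choose_rr = round_robin(servers)
choose_lc = least_connections(servers, active)

print("round robin:      ", [choose_rr() for _ in range(4)])
print("least connections:", choose_lc(), f"(connections: {active})")
```

For long-running connections and uneven workloads the two can produce very different distributions, which is precisely why the choice should be the customer’s to make.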

The existence of standards would allow these kinds of services to be offered, and to be more varied. If every application delivery solution could provide its services via a standard API – and similarly expose which services are available – then cloud providers could leverage those APIs to offer customers more choice, including services from competing solutions. Cloud computing is about integration and collaboration; it’s about flexibility not just in scalability but in architecture. What should be offered is the means by which customers can pick and choose from a broad selection of infrastructure services based on capability, price, and the specific needs of each application. Such an offering is highly unlikely to come to fruition without standards, simply because of the time and effort required to integrate fifteen or twenty different APIs with a single cloud management and orchestration system.
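A hedged sketch of what that discovery-and-selection step might look like, with entirely hypothetical offering data – each delivery service advertises its capabilities and price in the same standard shape, and the customer picks per application:

```python
# Hypothetical catalog: what a standard discovery API might return.
OFFERINGS = [
    {"vendor": "VendorA", "capabilities": {"waf", "compression"}, "price": 0.05},
    {"vendor": "VendorB", "capabilities": {"waf", "tcp-optimization"}, "price": 0.03},
    {"vendor": "VendorC", "capabilities": {"compression"}, "price": 0.01},
]

def select_service(required: set, budget: float):
    """Pick the cheapest offering that covers an application's needs."""
    candidates = [o for o in OFFERINGS
                  if required <= o["capabilities"] and o["price"] <= budget]
    return min(candidates, key=lambda o: o["price"], default=None)

# An application that needs a WAF, with a per-hour budget:
print(select_service({"waf"}, budget=0.04))  # -> VendorB's offering
```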

Taking Simon’s statement slightly out of context, I agree: the whole concept of offering homogeneous services is broken. But I’ll go further and say that no single virtualization vendor (or any other vendor, for that matter) is going to be able to offer providers – and thus customers – the entire set of infrastructure services required to efficiently and securely deliver an application.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.