How to Prepare for the Coming Age of Dynamic Infrastructure

Infrastructure 2.0 Journal




And the Killer App for Private Cloud Computing Is

Automating components is easy. It’s automating processes that’s hard

The premise that you cannot realize an automated, on-demand data center unless your infrastructure is comprised solely of Infrastructure 2.0 components is, in fact, wrong. The capabilities that come with modern Infrastructure 2.0 hardware, such as a standards-based API that automation systems can leverage, certainly make the task simpler, but they are not the only way components can be automated. In fact, “legacy” infrastructure has been automated for years using other mechanisms that can certainly be incorporated into the dynamic data center model.

When it’s time to upgrade or purchase new solutions, components enabled with standards-based APIs should certainly be considered before those without, but there’s no reason a hybrid data center replete with both legacy and dynamic infrastructure components cannot be automated in such a way as to form the basis for a “private cloud.” The notion that you must have a homogeneous infrastructure is not only unrealistic, it’s also indicative of a too-narrow focus on individual components rather than the systems – and processes – that make up data center operations.

In “The Case Against Private Clouds” Bernard Golden blames the inability to automate legacy infrastructure for a yet-to-occur failure in private cloud implementation:

The key to automating the bottom half of the chart -- the infrastructure portion -- is to use equipment that can be configured remotely with automated measures. In other words, the equipment must be capable of exposing an API that an automated configuration system can interact with. This kind of functionality is the hallmark of up-to-date equipment. Unfortunately, most data centers are full of equipment that does not have this functionality; instead they have a mishmosh of equipment of various vintages, much of which requires manual configuration. In other words, automating much of the existing infrastructure is a non-starter.

The claim that legacy infrastructure requires manual configuration, and that automating most of the infrastructure is therefore a “non-starter,” is the problem here. In other words, if you have “legacy” infrastructure in your data center you can’t build a private cloud, because there’s supposedly no way to automate its configuration and management.

Identity Management Systems (IDMS) focused on provisioning and process management solved this particular problem long ago, as has a plethora of automation and scripting-focused vendors that provide automation technology for network and systems management tasks. CMDB (Configuration Management Database) technology, too, has some capabilities around automating the configuration of network-focused devices that could easily be extended to a wider variety of network and application network infrastructure.

Any network or systems administrator worth their salt can whip up a script (PowerShell, bash, ksh, whatever) that automatically SSHes into a remote network device or system and launches another script to perform task X, Y, or Z. This is not rocket science; it isn’t even very hard. We’ve been doing it for as long as we’ve had networked systems that needed management.
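The scripting described above really is that mundane. Here’s a minimal sketch in Python of the pattern: build an `ssh` invocation for a remote device, then execute it. The hostnames, usernames, and the `show running-config` command are purely illustrative, not tied to any particular vendor’s device.

```python
import subprocess

def remote_command(host, command, user="admin", key_file=None):
    """Build the argv for an ssh invocation that runs `command` on a remote device."""
    argv = ["ssh"]
    if key_file:
        argv += ["-i", key_file]      # authenticate with a specific private key
    argv += [f"{user}@{host}", command]
    return argv

def run_on(host, command, **kwargs):
    """Execute the command on the remote host and return its stdout."""
    argv = remote_command(host, command, **kwargs)
    return subprocess.run(argv, capture_output=True, text=True, check=True).stdout

# Building the invocation is kept separate from running it, so the same
# logic drives a server, a switch, or a load balancer alike.
print(remote_command("10.0.0.5", "show running-config", user="netops"))
```

Separating command construction from execution is what lets this kind of script slot into a larger automation system later: the orchestrator decides *what* to run and *when*, and this layer only handles *how*.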

What is hard, and what’s going to make “private” clouds difficult to implement, is orchestration and management. That’s hard, and largely immature at this stage, because you’re automating processes (i.e. orchestration), not systems.

That’s really what’s key to a cloud implementation, not the automation of individual components in the network and application infrastructure.


AUTOMATION IS EASY. ORCHESTRATION IS WHAT’S HARD.


Anyone can automate a task on an individual data center component. But automating a series of tasks, i.e. a process, is much more difficult, because it not only requires an understanding of the process but is also essentially “integration.” And integration of systems, whether on the software side of the data center or on the network and application network side, is painful. “Integration” should be a four-letter word – a curse – and though it isn’t one, it’s often vocalized with the same tone and intention as someone invoking an ancient curse.

But I digress. The point is not that integration is hard – everyone knows that – but that the integration and collaboration of components that comprise the automation of processes, i.e. orchestration, is what makes building a “private cloud” difficult.
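The distinction is easy to see in code. Automating one task is a function call; orchestrating a process means sequencing many tasks and handling partial failure across them. Below is a minimal sketch (not any particular product’s design) of an orchestrator that runs steps in order and rolls back completed steps in reverse when one fails; the “provision VM” and “configure VIP” steps are hypothetical placeholders.

```python
class Orchestrator:
    """Run an ordered series of automation steps as a single process.

    Each step is a (do, undo) pair. If any step raises, the steps that
    already completed are rolled back in reverse order.
    """

    def __init__(self):
        self.steps = []

    def add_step(self, do, undo):
        self.steps.append((do, undo))

    def run(self):
        done = []                      # undo handlers for completed steps
        try:
            for do, undo in self.steps:
                do()
                done.append(undo)
        except Exception:
            for undo in reversed(done):
                undo()                 # compensate in reverse order
            raise

# Hypothetical provisioning process: each lambda stands in for a real task.
log = []
flow = Orchestrator()
flow.add_step(lambda: log.append("vm provisioned"), lambda: log.append("vm destroyed"))
flow.add_step(lambda: log.append("vip configured"), lambda: log.append("vip removed"))
flow.run()
```

The individual `do` functions are the easy part; the ordering, failure handling, and compensation logic around them is the orchestration, and that’s where the real complexity lives.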

Management and orchestration solutions that can easily integrate both legacy infrastructure and Infrastructure 2.0 – via standards-based APIs and via traditional “hacks” requiring secure remote access and remote execution of scripts – are the “killer app” for “private cloud computing.”
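One way such a solution hides the legacy-versus-modern distinction is an adapter layer: the orchestrator programs against one interface, and each adapter translates to either an API call or a remotely executed script. A minimal sketch, with the transports injected as plain callables so the shape of the idea is visible (the `set vlan 10` command syntax is invented for illustration):

```python
from abc import ABC, abstractmethod

class Component(ABC):
    """Uniform interface the orchestration layer programs against."""
    @abstractmethod
    def configure(self, settings): ...

class ApiComponent(Component):
    """Infrastructure 2.0 device: configured through a standards-based API."""
    def __init__(self, client):
        self.client = client           # e.g. a function wrapping a REST call
    def configure(self, settings):
        return self.client(settings)

class LegacyComponent(Component):
    """Legacy device: configured by remotely executing CLI commands."""
    def __init__(self, runner):
        self.runner = runner           # e.g. a function that runs a script over SSH
    def configure(self, settings):
        cmds = [f"set {k} {v}" for k, v in sorted(settings.items())]
        return self.runner("; ".join(cmds))

def configure_all(components, settings):
    """One process across mixed infrastructure; the caller never cares which is which."""
    return [c.configure(settings) for c in components]

# A mixed fleet driven through one interface (fakes stand in for real transports).
calls = []
configure_all(
    [ApiComponent(lambda s: calls.append(("api", s))),
     LegacyComponent(lambda cmd: calls.append(("ssh", cmd)))],
    {"vlan": 10},
)
```

The orchestration layer stays the same whether a component speaks a modern API or only a CLI over SSH – which is exactly why a heterogeneous data center is not a dead end for automation.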

It’s already been done in the identity management space (IDMS). It’s already been done in the business and application space (BPM). It should be no surprise that it will, eventually, be “done” in the infrastructure world. Folks watching the infrastructure and cloud computing space just have to stop looking at two layers of the stack and broaden their view a bit to realize that the answer isn’t going to be found solely within the confines of infrastructure. Like the model and applications it hosts, it’s going to be found in a collaborative effort involving components, systems, and people.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.