How to Prepare for the Coming Age of Dynamic Infrastructure

A new year is often a time for reflecting on the past and pondering the future. 2010 was certainly a momentous year for cloud computing: an explosion of tools for creating clouds, a global investment rush by service providers, a federal "cloud first" policy, and more. But in the words of that famous Bachman-Turner Overdrive song: "You ain't seen nothin' yet!"

In fact, I'd suggest that in terms of technological evolution, we're really just in the Bronze Age of cloud. I have no doubt that at some point in the not-too-distant future, today's cloud services will look as quaint as a historical village with no electricity or running water. This month's Wired article on AI is part of the inspiration for what follows. After all, if a computer can drive a car with no human intervention, why can't it run a data center?

Consider this vision of a future cloud data center.

The third of four planned 5-million-square-foot data centers quietly hums to life. In the control center, banks of monitors show data on everything from the number of running cores to network traffic to hotspots of power consumption. More than 100,000 ambient temperature and humidity sensors track environmental conditions, while three cooling towers vent the excess heat generated by the massively dense computing and storage farm.

The hardware, made to exacting specifications and supplied by multiple vendors, uses liquid coolant instead of fans, making this one of the quietest and most energy-efficient data centers on the planet. The 500U racks reach 75 feet up into the cavernous space, and the ceiling is another 50 feet higher still, where massive turbines draw cold air up through the floors. Temperature stays relatively steady as you go up the racks thanks to innovative ductwork that vents cold air every five feet of the climb.

Advanced robots wirelessly monitor the 10 GB/s data stream emitted by all of those sensors, using their accumulated "knowledge and experience" to swap out servers and storage arrays before they fail. Specially designed connector systems let individual pieces, or even whole blocks of hardware, snap in and out like so many Lego bricks, with no cabling required. All data moves on a fiber backbone at multiple terabytes per second.
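
To make that concrete, here is a minimal sketch, in Python, of the kind of predictive logic such robots might run. Every name, field and threshold below is an illustrative assumption, not a real monitoring API.

    # Hypothetical sketch: flag hardware for proactive replacement from
    # sensor trends, before it actually fails. All identifiers and
    # thresholds here are assumptions for illustration only.
    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        unit_id: str
        temp_c: float        # ambient temperature at the unit
        disk_realloc: int    # SMART-style reallocated-sector count

    def needs_replacement(history):
        """Crude stand-in for the robots' learned failure model: replace
        a unit when error counters trend upward or it runs hot."""
        if len(history) < 2:
            return False
        first, last = history[0], history[-1]
        return last.disk_realloc > first.disk_realloc or last.temp_c > 45.0

    readings = [
        SensorReading("rack12-u384", 41.0, 3),
        SensorReading("rack12-u384", 44.0, 9),
    ]
    if needs_replacement(readings):
        print("schedule robot swap for rack12-u384")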

On the data center floor, there are no humans. The PDUs, cooling systems and even the robots themselves are maintained by robots, or shipped out of the data center to an advanced repair facility when needed. In fact, the control center is empty too; the computers are running the data center. The only people here are in the shipping bay, receiving new equipment and shipping out the old and broken, and then only when needed. Most of them work for the shipping companies. The data center has no full-time employees. Even security and access control for the very few people allowed on the floor in an emergency is managed by computers attached to iris and handprint scanners.

The positioning and placement of storage and compute resources makes no sense to the human eye. In fact, the robots sometimes rearrange it based on the changing demands placed on the data center, or on changes predicted from past computing needs. Often this reflects the private computing needs of the large corporate and government clients who want (and will pay for) increased isolation and security. The bottom line: this is optimized far beyond what a logical human would achieve.

Tens of millions of cores, hundreds of exabytes of data, no admins.  Sweet.

The software automation is no less impressive. AI-based predictive modeling and management systems constantly optimize computing workloads and data. Data and computing tasks are both treated as portable, with one moving to the other as needed: where a large data set is involved, the compute task moves closer to the data; when only a small amount of data is needed, the data makes the trip to the compute server. Latency requirements also play a part, of course. Much of the data in the cloud is kept in memory, automatically, based on demand patterns.
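
That move-the-compute-or-move-the-data tradeoff can be pictured as a simple cost comparison. The Python sketch below assumes a fixed task-migration cost and a known link speed; the numbers and the decision rule are illustrative assumptions, not a published scheduler.

    # Illustrative placement decision: ship the data to the compute, or
    # the compute to the data? Compare estimated transfer time against
    # an assumed fixed cost of migrating the compute task.
    def placement(data_size_gb, link_gbps, latency_budget_ms,
                  migration_cost_ms=50.0):
        transfer_ms = data_size_gb * 8.0 / link_gbps * 1000  # GB -> Gb -> ms
        if transfer_ms > migration_cost_ms or transfer_ms > latency_budget_ms:
            return "move compute to data"
        return "move data to compute"

    print(placement(500, 100, 10))   # big data set  -> move compute to data
    print(placement(0.1, 100, 10))   # small request -> move data to compute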

The security AI is in a constant, all-out running battle with the bots, worms and viruses targeting the data center. All server images are built with agents and monitoring tools that track anomalies and attack patterns, and these are constantly updated. Customers can subscribe to various security services, and the image management system automatically checks for compliance. Most servers are randomly re-imaged throughout the day on the assumption that malware will eventually find a way in.
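
The random re-imaging policy is straightforward to picture in code. This Python sketch assigns each server one random wipe time during the day so malware cannot predict when any given host will be rebuilt; the scheduler and server names are hypothetical.

    # Sketch of the "assume breach" rotation: every server gets wiped
    # and rebuilt at a random, unpredictable time each day.
    import random

    SECONDS_PER_DAY = 24 * 60 * 60

    def schedule_reimages(server_ids, seed=None):
        """Return (offset_seconds, server_id) pairs in time order."""
        rng = random.Random(seed)
        jobs = [(rng.randrange(SECONDS_PER_DAY), sid) for sid in server_ids]
        return sorted(jobs)

    for offset, server in schedule_reimages(["web-01", "web-02", "db-01"]):
        print("t+%6ds  re-image %s" % (offset, server))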

Everything is virtualized: servers, storage, networking, data, databases, application platforms, middleware and more. And it is all delivered as a service, with unlimited scale-out (and scale-in) of every component. Developers write code but don't install or manage most application infrastructure and middleware components. It's all there, and it all just works.

Component-level failure is assumed and has no impact on running applications. Over time, as the AI learns, the reliability of the software infrastructure underlying any application exceeds 99.999999% (eight nines).
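
Eight nines sounds fantastical, but simple redundancy arithmetic shows how software that routes around failed components could get there. Assuming, optimistically, independent replicas that are each up 99.9% of the time:

    # Availability of n redundant replicas, assuming independent failures:
    # combined = 1 - (1 - per_replica) ** n
    def combined_availability(per_replica, n):
        return 1 - (1 - per_replica) ** n

    for n in (1, 2, 3):
        print(n, "%.9f" % combined_availability(0.999, n))
    # 1 -> 0.999000000 (three nines)
    # 2 -> 0.999999000 (six nines)
    # 3 -> 0.999999999 (nine nines, past the eight-nines claim)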

Everything is controllable through APIs, of course. And those APIs are all standards-based, so tools and applications are portable among clouds and between internal data centers and external clouds.
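
There are early approximations of this even now. Apache Libcloud, for example, already wraps dozens of providers behind one compute API; in the sketch below the credentials are placeholders, and only the provider constant changes to target a different cloud.

    # A taste of portable cloud tooling with Apache Libcloud: the same
    # list_nodes() call works against any supported provider.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    Driver = get_driver(Provider.EC2)          # swap in another provider here
    conn = Driver("ACCESS_KEY", "SECRET_KEY")  # placeholder credentials

    for node in conn.list_nodes():
        print(node.name, node.state, node.public_ips)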

All application code and data are geographically dispersed, so even the failure of this mega data center has minimal impact on applications. There may be a short hiccup, but it lasts only seconds before the applications and data pick up and keep on running.

Speaking of applications, this cloud data center hosts thousands of SaaS solutions covering everything from ERP and CRM to e-commerce, analytics and business productivity, in both horizontal and vertical flavors. All are exposed through Web services APIs, so new applications (mashups) can be created that combine them and their data in interesting new ways. The barriers between IaaS, PaaS and SaaS are blurred and, operationally, barely exist at all.

All of this is delivered at a fraction of the cost of today’s IT model.

Large data center providers that rely on today's automation methods and processes are uncompetitive. Many are on the verge of going out of business, and others are merging in order to survive. A few are moving into higher-level offerings, creating custom solutions and services.

The average enterprise data center budget is one-tenth of what it used to be. Only the applications that are too expensive to move, or otherwise unsuited to cloud deployment, are still managed in-house, by an ever-dwindling pool of IT operations specialists (everyone else has been retrained in cloud governance and management, or has found another career to pursue). Everything else is either a SaaS app or otherwise cloud-hosted.

Special-purpose clouds within clouds are easily created on the fly, and just as easily destroyed when no longer needed.

The future of the cloud data center is AI-managed, highly optimized, and incredibly powerful at a scale never before imagined. The demand for computing power and storage continues to grow at an ever-increasing rate. Before long, the data center described above will be considered commonplace, with scores or even hundreds of them sprinkled around the globe.

This is the future – will you be ready?

More Stories By John Treadway

John Treadway is a Vice President at Cloud Technology Partners and has over 20 years of experience delivering technology and business solutions to domestic and global enterprises across multiple industries and sectors. As a senior enterprise technology and services executive, he has a successful track record of leading strategic cloud computing and data center initiatives. John is responsible for technology IP at Cloud Technology Partners and is actively involved in client projects and strategic alliances. He is also an active blogger in the cloud computing space and authors the CloudBzz blog.
