Focusing on Cloud Portability and Interoperability

Cloud computing has helped us understand both the opportunity and the need to decouple physical IT infrastructure from the requirements of the business.  In theory, cloud computing not only makes it far easier for an organization to decommission inefficient data center resources, but, more importantly, it smooths the path toward integration and service-orientation within supporting IT systems.

Current cloud computing standards, such as those published by the US National Institute of Standards and Technology (NIST), provide good definitions and a solid reference architecture for understanding the vision of cloud computing at a high level.

However these definitions, while useful for describing the vision of cloud computing, are not at the level of detail needed to really understand the potential impact of cloud computing within an existing organization, nor to show how data and systems resources can meet the interoperability demands of a 2020 or 2025 IT world.

The key to interoperability, and subsequent portability, is a clear set of standards.  The Internet emerged from a collaboration of academic, government, and private industry development that bypassed much of the usual vendor desire to create a proprietary product or service.  The cloud computing world, while having deep roots in mainframe computing, time-sharing, grid computing, and web hosting services, was thrust upon the IT community with little fanfare in the mid-2000s.

While NIST, the Open Grid Forum, OASIS, the DMTF, and other organizations have developed some level of standardization for virtualization and portability, the reality is that applications, platforms, and infrastructure remain largely tightly coupled, restricting the ease with which most developers can pursue higher levels of integration and interconnection of data and applications.

NIST’s Cloud Computing Standards Roadmap (SP 500-291 v2) states:

…the migration to cloud computing should enable various multiple cloud platforms seamless access between and among various cloud services, to optimize the cloud consumer expectations and experience.

Cloud interoperability allows seamless exchange and use of data and services among various cloud infrastructure offerings and to the data and services exchanged to enable them to operate effectively together.”

That is easy to say; the reality, particularly with PaaS and SaaS libraries and services, is that few fully interchangeable components exist, and any information sharing is a compromise in flexibility.

The Open Group, in its document “Cloud Computing Portability and Interoperability,” simplifies the problem into a single statement:

“The cheaper and easier it is to integrate applications and systems, the closer you are getting to real interoperability.”

The alternative, of course, is an IT world constrained by proprietary interfaces, extending the pitfalls and dangers of vendor lock-in.

What Can We Do?

The first thing is that the cloud consumer world must take a stand and demand that vendors produce services and applications based on interoperability and data portability standards.  No IT organization anywhere on the current IT maturity continuum should be procuring systems that do not support an open, industry-standard, service-oriented infrastructure, platform, and applications reference model (Open Group).

In addition to the need for interoperable data and services, the concept of portability is essential to developing, operating, and maintaining effective disaster management and continuity of operations procedures.  No IT infrastructure, platform, or application should be considered that does not allow and embrace portability.  This includes NIST’s guidance, which states:

“Cloud portability allows two or more kinds of cloud infrastructures to seamlessly use data and services from one cloud system and be used for other cloud systems.”

The bottom line for all CIOs, CTOs, and IT managers: accept the need for service-orientation within all existing or planned IT services and systems.  Embrace Service-Oriented Architecture and Enterprise Architecture, and avoid at all costs the potential for vendor lock-in when considering any level of infrastructure or service.

Standards are the key to portability and interoperability, and IT organizations have the power to keep pushing all vendors toward adoption of and compliance with standards.  Do not accept anything that does not fully support the need for data interoperability.

CloudGov 2012 Highlights Government Cloud Initiatives

Federal, state, and local government agencies gathered in Washington, D.C. on 16 February to participate in Cloud/Gov 2012, held at the Westin Washington D.C.  With keynotes by David L. McClure of the US General Services Administration and Dawn Leaf of NIST, vendors and government agencies were brought up to date on federal cloud policies and initiatives.

Of special note were updates on the FedRAMP program (a government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services) and NIST’s progress on standards.  “The FedRAMP process chart looks complicated,” noted McClure, “however we are trying to provide the support needed to accelerate the (FedRAMP vendor) approval process.”

McClure also provided a roadmap for FedRAMP implementation, with FY13/Q2 targeted for full operation and FY14 planned for sustaining operations.

In a panel focusing on government case studies, David Terry from the Department of Education commented that “mobile phones are rapidly becoming the access point (to applications and data) for young people.”  Applications (SaaS) should be written to accommodate mobile devices, and “auto-adjust to user access devices.”

Tim Matson from DISA highlighted the US Department of Defense’s Forge.Mil initiative, which provides an open collaboration community where the military and the development community can work together to rapidly develop new applications that better support DoD activities.  While Forge.Mil has tighter controls than standard GSA (US General Services Administration) standards, Matson emphasized that “DISA wants to force the concept of change into the behavior of vendors.”  Matson continued by explaining that Forge.Mil will reinforce “a pipeline to support continuous delivery” of new applications.

While technology and process change topics, mostly discussed with enthusiasm, provided the majority of discussion points, David Mihalchik from Google advised, “we still do not know the long term impact of global collaboration.  The culture is changing, forced on by the idea of global collaboration.”

Other areas of discussion among panel members throughout the day included the need for establishing and defining service level agreements (SLAs) for cloud services.  Daniel Burton from SalesForce.Com explained that their SLAs fall into two categories: SLAs based on subscription services, and those based on specific negotiations with government customers.  Other vendors took a stab at explaining their SLAs without giving specific examples, leaving the audience without a solid answer.

NIST Takes the Leadership Role

The highlight of the day came from Dawn Leaf, Senior Executive for Cloud Computing at NIST, who offered very logical guidance for all cloud computing stakeholders, including vendors and users.

“US industry requires an international standard to ensure (global) competitiveness,” explained Leaf.  In the past, US vendors and service providers have developed standards that were not compatible with European and other standards, notably in wireless telephony, and one of NIST’s objectives is to participate in developing a global standard for cloud computing to prevent that from happening again.

Cloud infrastructure and SaaS portability is also a high-interest item for NIST.  Leaf advised that “we can force vendors into demonstrating their portability.  There are a lot of new entries in the business, and we need to force the vendors into proving their portability and interoperability.”

Leaf also reinforced the idea that standards are developed in the private sector.  NIST provides guidance and an architectural framework for vendors and the private sector to use as a reference when developing those specific technical standards.  However, Leaf also had one caution for private industry: “industry should try to map their products to NIST references, as the government is not in a position to wait” for extended debates on the development of specific items, when the need for cloud computing development and implementation is immediate.

Further information on the conference, with agendas and participants, is available at www.sia.net.

Cloud Computing Wish List for 2011

2010 was a great year for cloud computing.  The hype phase of cloud computing is closing in on maturity, as the message has finally reached nearly everyone in the Cxx tier.  And for good reason.  The diffusion of IT-everything into nearly every aspect of our lives needs a lot of compute, storage, and network horsepower.

And we are finally getting to the point where cloud computing is no longer explained with exotic diagrams on a whiteboard or PowerPoint presentation, but is actually something we can start knitting together into a useful tool.

The National Institute of Standards and Technology (NIST) in the United States takes cloud computing seriously, and is well on the way to setting standards for cloud computing, at least in the US.  The NIST definitions of cloud computing are already an international reference, and as that taxonomy continues to baseline vendor cloud solutions, it is a good sign that we are on the way to product maturity.

Now is the Time to Build Confidence

Unless you are an IT manager in a bleeding-edge technology company, there is rarely any incentive to be in the first-mover quadrant of technology implementation.  The intent of IT managers is to keep the company’s information secure and provide the utilities needed to meet company objectives.  Putting a company at risk by implementing “cool stuff” is not the best career choice.

However, as cloud computing continues to mature, and the cost of operating an internal data center continues to rise (due to the cost of electricity, real estate, and equipment maintenance), IT managers really have no choice: they have to at least learn the cloud computing technology and operations environment, if for no other reason than that their Cxx team will eventually ask, “what does this mean to our company?”

An IT manager will need to prepare an educated response to the Cxx team, and be able to clearly articulate the following:

  • Why cloud computing would bring operational or competitive advantage to the company
  • Why it might not bring advantage to the company
  • The cost of operating in a cloud environment versus a traditional data center environment
  • The relationship between data center consolidation and cloud computing
  • The advantage or disadvantage of data center outsourcing and consolidation
  • The differences between enterprise clouds, public clouds, and hybrid clouds
  • The OPEX/CAPEX comparisons of running individual servers versus virtualization, or virtualization within a cloud environment (a rough comparison sketch follows this list)
  • A graphical presentation and description of cloud computing models compared to traditional models, including the cost of capacity
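
To make that Cxx conversation concrete, a rough back-of-the-envelope comparison often helps.  The Python sketch below is illustrative only: the server counts, hardware cost, power, staffing, and instance pricing are hypothetical placeholders, not benchmarks, and any real analysis should substitute the organization’s own figures.

```python
# Back-of-the-envelope CAPEX/OPEX comparison: owned servers vs. cloud instances.
# All figures are hypothetical placeholders; substitute your own costs.

def on_prem_annual_cost(servers, hw_cost=6000, lifespan_yrs=4,
                        power_cooling_per_server=900, admin_per_server=1200,
                        facility_per_server=800):
    """Annualized cost of running servers in an internal data center."""
    capex_per_year = servers * hw_cost / lifespan_yrs      # hardware depreciation
    opex_per_year = servers * (power_cooling_per_server +
                               admin_per_server + facility_per_server)
    return capex_per_year + opex_per_year

def cloud_annual_cost(servers, hourly_rate=0.20, utilization=0.6):
    """Annualized cost of equivalent cloud instances, paying only for what runs."""
    hours_per_year = 24 * 365
    return servers * hourly_rate * hours_per_year * utilization

if __name__ == "__main__":
    for n in (10, 50, 200):
        print(f"{n:>4} servers: on-prem ${on_prem_annual_cost(n):>10,.0f}/yr   "
              f"cloud ${cloud_annual_cost(n):>10,.0f}/yr")
```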

Wish List Priority 1 – Cloud Computing Interoperability

It is not just about vendor lock-in.  It is not just about building a competitive environment.  It is about having the opportunity to use local, national, and international cloud computing resources when it is in the interest of your organization.

Hybrid clouds are defined by NIST, but in reality they are still simply a great idea.  The idea of being able to overflow processing from an enterprise cloud to a public cloud is well-founded, and in fact represents one of the basic visions of cloud computing: processing capacity on demand.
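
The overflow decision itself is easy to express, which makes the gap between the idea and today’s tooling all the more frustrating.  The sketch below is purely conceptual: the capacity figures, the threshold, and the abstract “compute units” are assumptions for illustration, and a real hybrid deployment would also have to solve image portability, networking, and data placement.

```python
# Conceptual sketch of hybrid-cloud "overflow" (cloud bursting).
# Threshold, capacities, and the unit of work are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Demand:
    compute_units: int          # abstract measure of required processing capacity

ENTERPRISE_CAPACITY = 100       # units the private (enterprise) cloud can absorb
BURST_THRESHOLD = 0.85          # start overflowing at 85% utilization

def place_workload(demand: Demand, current_load: int):
    """Decide how many compute units stay private and how many overflow."""
    headroom = int(ENTERPRISE_CAPACITY * BURST_THRESHOLD) - current_load
    private_units = max(0, min(demand.compute_units, headroom))
    public_units = demand.compute_units - private_units
    return private_units, public_units

if __name__ == "__main__":
    private, public = place_workload(Demand(compute_units=40), current_load=70)
    print(f"Run {private} units in the enterprise cloud, "
          f"overflow {public} units to a public cloud")
```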

But let’s take this one step further.  The cloud exchange.  We’ve discussed this for a couple of years, and now the technology needs to catch up with the concept.

If we can have an Internet exchange, a Carrier Ethernet exchange, and a telephone exchange, why can’t we have a Cloud Exchange: a single one-stop shop where consumers of cloud compute capacity can access a spot market for on-demand cloud compute resources?

Here is one idea.  Take your average Internet Exchange Point, like Amsterdam (AMS-IX), Frankfurt (DE-CIX), Any2, or London (LINX), where hundreds of Internet networks, content delivery networks, and enterprise networks come together to interconnect at a single point.  This is the place where the only restriction on interconnecting networks and resources is the capacity of the port(s) connecting you to the exchange point.

Most Internet Exchange Points are colocated with large data centers, or are in very close proximity to large data centers (with a lot of dark fiber connecting the facilities).  These data centers host most of the large content delivery networks (CDNs) facing the Internet.  Many of those CDNs have irregular capacity requirements driven by event-based, seasonal, or other activities.

A CDN can either build its colocation capacity to meet the maximum forecast requirements of its product, or it can potentially interconnect with a colocated cloud computing company for overflow capacity, at the point of Internet exchange.

The cloud computing companies (with the exception of the “Big 3”) are also, yes, in the same data centers as the CDNs.  Ditto for the enterprise networks choosing either to outsource their operations into a data center or to outsource into a public cloud provider.

Wish List: Develop a cloud computing exchange colocated with, or made part of, large Internet Exchange Points.
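
What would trading capacity at such an exchange even look like?  The toy sketch below shows one naive way a spot market could match capacity offers against bids; the provider names, prices, and the greedy matching rule are all invented for illustration and say nothing about how a real exchange would clear its market.

```python
# Toy sketch of spot-market matching at a hypothetical cloud exchange.
# Offers, bids, and prices are invented for illustration only.

def match_spot_market(offers, bids):
    """Greedy match: cheapest offers fill the highest-priced bids first.
    offers: list of (provider, units_available, ask_price_per_unit_hour)
    bids:   list of (consumer, units_wanted, max_price_per_unit_hour)
    """
    offers = sorted(offers, key=lambda o: o[2])             # cheapest ask first
    bids = sorted(bids, key=lambda b: b[2], reverse=True)   # highest bid first
    trades = []
    for consumer, wanted, max_price in bids:
        for i, (provider, avail, ask) in enumerate(offers):
            if wanted == 0:
                break
            if avail == 0 or ask > max_price:
                continue
            units = min(wanted, avail)
            trades.append((consumer, provider, units, ask))
            offers[i] = (provider, avail - units, ask)
            wanted -= units
    return trades

if __name__ == "__main__":
    offers = [("cloudA", 500, 0.08), ("cloudB", 200, 0.05)]
    bids = [("cdn1", 300, 0.07), ("enterprise1", 150, 0.10)]
    for trade in match_spot_market(offers, bids):
        print(trade)
```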

Wish List Extra Credit: Switch vendors develop high-capacity SSDs that fit into switch slots, making storage part of the switch backplane.

Simple and Secure Disaster Recovery Models

Along with the idea of distributed cloud processing, interoperability, and on-demand resources comes the simplest of all cloud visions: disaster recovery.

One of the reasons we all talk about cloud computing is the potential for data center consolidation and the recovery of CAPEX/OPEX for reallocation into development and revenue-producing activities.

However, with data center consolidation comes the equally important task of developing strong disaster recovery and business continuity models.  Whether through producing hot standby images of applications and data, simply backing up data to a remote (secure) location, or both, disaster recovery remains a high priority for 2011.

You might say “disaster recovery has been around since the beginning of computing, with 9-track tape copies and punch cards, so what’s new?”

What’s new is the reality that most companies and organizations still have no meaningful disaster recovery plan.  There may be a weekly backup to tape or disk; there may even be the odd company or organization with a standby capability that limits recovery time and recovery point objectives to a day or two.  But let’s be honest: those are the exceptions.
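
Recovery time and recovery point objectives are easier to take seriously once they are measured rather than merely written down.  The sketch below is a minimal illustration of that measurement; the timestamps and the one-hour and four-hour targets are hypothetical examples, and a real plan would track these per system and verify them through regular restore tests.

```python
# Minimal sketch: measuring achieved RPO and RTO against stated objectives.
# Timestamps and targets are hypothetical examples.

from datetime import datetime, timedelta

RPO_TARGET = timedelta(hours=1)    # maximum tolerable data loss
RTO_TARGET = timedelta(hours=4)    # maximum tolerable downtime

def check_recovery(last_backup: datetime, failure: datetime, restored: datetime):
    """Compare the achieved recovery point/time against the stated objectives."""
    achieved_rpo = failure - last_backup   # data written since last backup is lost
    achieved_rto = restored - failure      # time the service was unavailable
    return {
        "achieved_rpo": achieved_rpo,
        "rpo_met": achieved_rpo <= RPO_TARGET,
        "achieved_rto": achieved_rto,
        "rto_met": achieved_rto <= RTO_TARGET,
    }

if __name__ == "__main__":
    result = check_recovery(
        last_backup=datetime(2011, 1, 9, 23, 0),   # weekly backup finished
        failure=datetime(2011, 1, 14, 9, 30),      # outage begins
        restored=datetime(2011, 1, 15, 18, 0),     # service finally restored
    )
    print(result)
```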

Having surveyed enterprise and government users over the past two years, we have noticed that very, very few organizations with paper disaster recovery plans actually implement their plans in practice.  This includes many local and state governments within the US (check out some of the reports published by the National Association of State CIOs/NASCIO if you don’t believe this statement!).

Wish List Item 2: Develop a simple, really simple, and cost-effective disaster recovery model within the cloud computing industry.  Make it an inherent part of all cloud computing products and services.  Make it so simple that no IT manager can ever again come up with an excuse why their recovery point and time objectives are not ZERO.

Moving Towards the Virtual Desktop

Makes sense.  If cloud computing brings applications back to the SaaS model, and communications capacity and bandwidth have reduced delays, even on long-distance connections, to the point where we humans cannot tell whether we are on a LAN or a WAN, then let’s start dumping high-cost workstations.

Sure, the 1% of the IT world using CAD, graphic design, and other funky stuff will still need the most powerful computer available on the market, but the rest of us can certainly live with hosted email, other unified communications, and office automation applications.  You start your dumb terminal with the 30” screen at 0800 and log off at 1730.

If you really need to check email at night or on the road, your 3G or 4G smartphone or netbook connection will provide more than adequate bandwidth to connect to your hosted email application or files.

This supports disaster recovery objectives, lowers the cost of expensive workstations, and allows organizations to regain control of their intellectual property.

With application portability, at this point it makes no difference whether you are using Google Apps, Microsoft 365, or some other emerging hosted environment.

Wish List Item 3: IT managers, please consider dumping the high-end desktop workstation, gain control over your intellectual property, recover the cost of IT equipment, and standardize your organizational environment.

More Wish List Items

Yes, there are many more.  But those start edging towards “cool.”  We want to concentrate on those items really needed to continue pushing the global IT community towards virtualization.
