Focusing on Cloud Portability and Interoperability

Cloud computing has helped us understand both the opportunity and the need to decouple physical IT infrastructure from the requirements of business.  In theory, cloud computing greatly enhances an organization’s ability to decommission inefficient data center resources and, even more importantly, eases the transition to integration and service-orientation within supporting IT systems.

Current cloud computing standards, such as those published by the US National Institute of Standards and Technology (NIST), provide good definitions and a solid reference architecture for understanding, at a high level, the vision of cloud computing.

However, these definitions, while good for addressing the vision of cloud computing, lack the level of detail needed to really understand the potential impact of cloud computing within an existing organization, or the potential for enabling data and systems resources to meet the interoperability needs of a 2020 or 2025 IT world.

The key to interoperability, and subsequent portability, is a clear set of standards.  The Internet emerged as a collaboration of academic, government, and private industry development which bypassed much of the normal technology vendor desire to create a proprietary product or service.  The cloud computing world, while having deep roots in mainframe computing, time-sharing, grid computing, and other web hosting services, was really thrust upon the IT community with little fanfare in the mid-2000s.

While NIST, the Open Grid Forum, OASIS, the DMTF, and other organizations have developed some levels of standardization for virtualization and portability, the reality is that applications, platforms, and infrastructure remain largely tightly coupled, restricting the ease with which most developers can accelerate higher levels of integration and interconnection of data and applications.

NIST’s Cloud Computing Standards Roadmap (SP 500-291 v2) states:

“…the migration to cloud computing should enable various multiple cloud platforms seamless access between and among various cloud services, to optimize the cloud consumer expectations and experience.

Cloud interoperability allows seamless exchange and use of data and services among various cloud infrastructure offerings and to the data and services exchanged to enable them to operate effectively together.”

This is very easy to say; the reality, however – in particular with PaaS and SaaS libraries and services – is that few fully interchangeable components exist, and any information sharing comes with a compromise in flexibility.

The Open Group, in its document “Cloud Computing Portability and Interoperability,” simplifies the problem into a single statement:

“The cheaper and easier it is to integrate applications and systems, the closer you are getting to real interoperability.”

The alternative is of course an IT world that is restrained by proprietary interfaces, extending the pitfalls and dangers of vendor lock-in.

What Can We Do?

First, the cloud consumer world must take a stand and demand that vendors produce services and applications based on interoperability and data portability standards.  No IT organization in the current IT maturity continuum should be procuring systems that do not support an open, industry-standard, service-oriented infrastructure, platform, and applications reference model (Open Group).

In addition to the need for interoperable data and services, the concept of portability is essential to developing, operating, and maintaining effective disaster management and continuity of operations procedures.  No IT infrastructure, platform, or application should be considered which does not allow and embrace portability.  This includes NIST’s guidance stating:

“Cloud portability allows two or more kinds of cloud infrastructures to seamlessly use data and services from one cloud system and be used for other cloud systems.”

The bottom line for all CIOs, CTOs, and IT managers – accept the need for service-orientation within all existing or planned IT services and systems.  Embrace service-oriented architectures and enterprise architecture, and avoid at all costs the potential for vendor lock-in when considering any level of infrastructure or service.

Standards are the key to portability and interoperability, and IT organizations have the power to continue forcing adoption and compliance with standards by all vendors.  Do not accept anything which does not fully support the need for data interoperability.
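One practical expression of the portability principle is insisting that every system keep an export path in an open, vendor-neutral format, so data can move between services without a proprietary converter.  A minimal sketch in Python; the record fields here are purely illustrative:

```python
import csv
import io
import json

def export_records(records, fmt="json"):
    """Serialize records to an open, vendor-neutral format (JSON or CSV)
    so the data can move between cloud services without lock-in."""
    if fmt == "json":
        return json.dumps(records, indent=2)
    if fmt == "csv":
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=sorted(records[0]))
        writer.writeheader()
        writer.writerows(records)
        return buf.getvalue()
    raise ValueError(f"unsupported format: {fmt}")

# Hypothetical records, just to exercise both formats.
records = [{"id": 1, "service": "storage"}, {"id": 2, "service": "compute"}]
print(export_records(records, "csv").splitlines()[0])  # → id,service
```

The point is not the trivial code but the procurement rule it implies: if a service cannot round-trip your data through a format like this, it fails the interoperability test.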

It is Time to Consider Wireless Mesh Networking in Our Disaster Recovery Plans

Wireless Mesh Networking (WMN) has been around for quite a few years.  However, mesh networking did not become well known until recently, when protesters in Cairo and Hong Kong used applications such as FireChat to bypass the mobile phone systems and communicate directly with each other.

A Wireless Mesh Network (WMN) establishes an ad hoc communications network using the WiFi (802.11) radios on participants’ mobile phones and laptops to connect with each other, extending the connectable portion of the network to any device with WMN software.  Some devices may act as clients, some as mesh routers, and some as gateways.  Of course there are more technical issues to fully understand with mesh networks, but the bottom line is that if you have an Android or iOS device, or a software-enabled laptop, you can join, extend, and participate in a WMN.
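The hop-by-hop relay behavior described above can be sketched in a few lines.  This is a simplified flooding simulation, not a real mesh routing protocol (production meshes use protocols such as 802.11s or B.A.T.M.A.N.), and the node positions and radio range are made-up numbers:

```python
import math

class MeshNode:
    """A node in a simulated wireless mesh network."""
    def __init__(self, node_id, x, y, radio_range=100.0):
        self.node_id = node_id
        self.x, self.y = x, y
        self.radio_range = radio_range
        self.seen = set()   # message IDs this node has already relayed
        self.inbox = []     # payloads received

    def in_range(self, other):
        return math.hypot(self.x - other.x, self.y - other.y) <= self.radio_range

def flood(nodes, origin, msg_id, payload):
    """Relay a message hop by hop; each node forwards it exactly once."""
    frontier = [origin]
    origin.seen.add(msg_id)
    while frontier:
        next_frontier = []
        for node in frontier:
            for peer in nodes:
                if peer is not node and node.in_range(peer) and msg_id not in peer.seen:
                    peer.seen.add(msg_id)
                    peer.inbox.append(payload)
                    next_frontier.append(peer)
        frontier = next_frontier

# A chain of five phones 80 m apart, each with ~100 m of WiFi range:
# no phone can reach the far end directly, but the mesh relays for them.
nodes = [MeshNode(i, x=i * 80, y=0) for i in range(5)]
flood(nodes, nodes[0], msg_id=1, payload="Are you safe?")
print(sum(1 for n in nodes if n.inbox))  # → 4 (every node beyond the origin)
```

The chain example is exactly why node density matters: each additional participating device extends the reachable edge of the network.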

In locations highly vulnerable to natural disasters, such as hurricanes, tornadoes, earthquakes, or wildfires, access to communications can most certainly mean the difference between surviving and not surviving.  However, during disasters, communications networks are likely to fail.

The same concept used to allow protesters in Cairo and Hong Kong to communicate outside of the mobile and fixed telephone networks could, and possibly should, have a role to play in responding to disasters.

An interesting use of this type of network was highlighted in a recent novel by Matthew Mather, entitled “CyberStorm.”  Following a cyber attack on the US Internet and connected infrastructures, much of the fixed communications infrastructure was rendered inoperable, and utilities that depended on those networks failed as well.  An ad hoc WMN was built by some enterprising technicians using the wireless radios available in most smartphones.  This supported primarily messaging, but it allowed citizens to communicate with each other – and with the police – by interconnecting their smartphones into the mesh.

We have already embraced mobile phones, with SMS instant messaging, into many of our country’s emergency notification systems.  In California we can receive instant notifications from emergency services via SMS and Twitter, in addition to reverse 911.  This actually works very well, up to the point of a disaster.

WMN may provide a model for ensuring communications following a disaster.  As nearly every American now has a mobile phone with a WiFi radio, the basic requirements for a mesh network are already in our hands.  The main barrier today with WMN is the distance limitation between participating access devices.  With luck, WiFi radios will continue to increase in range with each new generation, reducing distance barriers.

There are quite a few WMN clients available today for smartphones, tablets, and WiFi-enabled devices.  While many of these are used as instant messaging and social platforms, the underlying technology – just as with other social communications applications such as Twitter – can be put to many different uses, including, of course, disaster communications.

Again, the main limitations on using WMNs in disaster planning today are the limited number of participating nodes (devices with a WiFi radio), the distance limitations of existing wireless radios and protocols, and the fact that very few people are even aware of the concept of WMNs and their potential deployments or uses.  The more participants in a WMN, the more robust it becomes, the better the performance it will support, and the better the chance your voice will be heard during a disaster.

Here are a few WMN disaster-support ideas I’d like either to develop or to see others develop:

  • Much like the existing 911 network, a WMN standard could and should be developed for all mobile phone devices, tablets, and laptops with a wireless radio
  • Each mobile device should include an “App” for disaster communications
  • Cities should attempt to install WMN compatible routers and access points, particularly in areas at high risk for natural disasters, which could be expected to survive the disaster
  • Citizens in disaster-prone areas should be encouraged to add a solar charging device to their earthquake, wildfire, and other disaster-readiness kits to allow battery charging following an anticipated utility power loss
  • Survivable mesh-to-Internet gateways should be the responsibility of city government, while allowing citizen or volunteer gateways (including ham radio) to facilitate communications out of the disaster area
  • Emergency applications should include the ability to easily submit disaster status reports, including photos and video, to either local, state, or FEMA Incident Management Centers

That is a start.
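The emergency-application idea above – submitting status reports, photos, and video to an incident management center – might carry a payload along these lines.  The fields and identifiers are purely illustrative, not any existing FEMA format:

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class DisasterReport:
    """Status report a mesh-connected phone could queue for an incident
    management center, to be forwarded when a gateway becomes reachable."""
    reporter_id: str
    latitude: float
    longitude: float
    status: str                 # e.g. "safe", "injured", "trapped"
    message: str = ""
    media_refs: list = field(default_factory=list)  # hashes of photos/video

# Hypothetical report from a phone in Honolulu.
report = DisasterReport("phone-4411", 21.3069, -157.8583, "safe",
                        "Building evacuated, all accounted for")
packet = json.dumps(asdict(report))  # small enough to hop across a slow mesh
print(json.loads(packet)["status"])  # → safe
```

Keeping the core report tiny and text-based matters here: a flooded mesh of low-bandwidth hops can move a few hundred bytes of JSON far more reliably than a photo, which is why media is referenced rather than embedded.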

Take a look at wireless mesh networks.  Wikipedia has a great high-level explanation, and a Google search yields hundreds of entries.  WMNs are nothing new but, as in the early days of the Internet, they are not getting a lot of attention.  Someday, though, a WMN could save your life.

Disaster Recovery as a First Step into Cloud Computing

Organizations see the benefits of cloud computing, but many are simply mortified at the prospect of re-engineering their operations to fit existing cloud service technologies or architectures.  So how can we make the first step?

We (at Pacific-Tier Communications) have conducted 103 surveys over the past few months in the US, Canada, Indonesia, and Moldova on the topic of cloud computing.  The surveys targeted both IT managers in commercial companies, as well as within government organizations.

The survey results were really no different from most – IT managers in general find cloud computing and virtualization exciting technology and service developments, but they are reluctant to jump into cloud for a variety of reasons, including:

  • Organization is not ready (including internal politics)
  • No specific budget
  • Applications not prepared for migration to cloud
  • and lots of other reasons

The list and reasoning for not going into cloud will continue until organizations get to the point they cannot avoid the topic, probably around the time of a major technology refresh.

Disaster Recovery is Different

The surveys also indicated another consistent trend – most organizations still have no formal disaster recovery plan.  This is particularly common within government agencies, including the state and local governments surveyed in the United States.

IT managers in many government agencies had critical data stored on laptop computers or desktops, or in most cases kept their organization’s operating data in a server closet with either no backup, or onsite backup to a tape system with no offsite storage.

In addition, the central or controlling government or commercial IT organization either had no specific policy for backing up data or, in the worst case, had no means of backing up data (no central or common storage system) available to individual branch or agency users.

When asked whether they would support automated backup if cloud storage, or even dedicated storage, became available with reasonable technical ease and at an affordable cost, the IT managers agreed, most enthusiastically, that they would support development of automated backup and individual workstation backup to prevent data loss and reinforce availability of applications.
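Automated workstation backup of the kind the managers described can start very simply: copy any file whose content has changed to a backup location, which could be a locally mounted cloud storage volume.  A minimal sketch; both directory paths are placeholders, and a real deployment would add encryption, retention, and scheduling:

```python
import hashlib
import shutil
from pathlib import Path

def _digest(path):
    """Content hash used to detect changed files."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup_changed_files(source_dir, backup_dir):
    """Copy files whose content differs from (or is missing in) the backup.

    `backup_dir` could be a locally mounted cloud storage bucket; the
    function only needs it to behave like a filesystem path.
    Returns the number of files copied.
    """
    source, backup = Path(source_dir), Path(backup_dir)
    copied = 0
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dest = backup / src.relative_to(source)
        if not dest.exists() or _digest(src) != _digest(dest):
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)  # preserves timestamps
            copied += 1
    return copied
```

Run from a scheduler (cron, Task Scheduler), this gives exactly the incremental, offsite-capable workstation backup the surveyed managers said they would adopt.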

Private or Public – Does it Make a Difference?

While most IT managers are still worshiping at the shrine of IT Infrastructure Control, there are cracks appearing in the “Great Walls of IT Infrastructure.”  With dwindling IT budgets and explosive user and organizational demand for IT utility, IT managers are slowly realizing the good old days of control are nearly gone.

To add further tarnish to pride, IT managers are also faced with the probability that at least some of their infrastructure will find its way into public cloud services, completely out of their domain.

On the other hand, it is becoming more and more difficult to justify building internal infrastructure when the quality, security, and utility of public services often exceed what can be built internally.  Of course there are exceptions to every rule, which in our discussion include requirements for additional security for government-sensitive or classified information.

That information could include military data, citizen identification data, or other similar information that, while securable through encryption and partition management, politically may not be possible to extend beyond the walls of an internal data center – particularly in cases where the data could possibly leave the borders of a country.

For most other information, it is quickly becoming a simple exercise in financial planning to determine whether a public storage service or an internal storage service makes more sense.

The Intent is Disaster Recovery and Data Backup

Getting back to the point: with nearly all countries – and central government properties in particular – on or near high-capacity telecom carriers and networks, and with the cost of bandwidth plummeting, the excuses for not using network-based offsite backups of individual and organizational data are becoming rare.

In our surveys and interviews it was clear IT managers fully understood the issue, need, and risk of failure relative to disaster recovery and backup.

Cloud storage, when explained and understood, would help solve the problem.  Pushing disaster recovery (at least at the level of backups) into cloud storage, as a first step – and assuming that first step succeeds – may be an important move toward a longer-term migration to cloud services.

All managers understood the potential benefits of virtual desktops, SaaS applications, and use of high performance virtualized infrastructure.  They did not always like it, but they understood within the next refresh generation of hardware and software technology, cloud computing would have an impact on their organization’s future.

But in the short term, disaster recovery and systems backup into cloud storage is the least traumatic first step ahead.

How about your organization?

Developing Disaster Recovery Models with Cloud Computing

How does a small or medium business ensure it can meet the basic needs for disaster recovery and business continuity? Whether it be Internet-facing applications, or Enterprise-facing applications and data, one of the most important issues faced by small companies is the potential loss of information and applications needed to run their operations.

Disaster recovery and business continuity.  Recovery point objectives and recovery time objectives.  Backing up data to offsite locations, and potentially running mirrored processing sites – these are expensive business requirements to fulfill, particularly for budget-conscious small and medium-sized companies.
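Recovery point and recovery time objectives reduce to simple arithmetic once a backup strategy is chosen.  A rough sketch with illustrative numbers (the rates and durations are made up for the example):

```python
def worst_case_data_loss_hours(backup_interval_h, backup_duration_h):
    """RPO for interval-based backups: a failure just before the next
    backup completes loses one full interval plus the in-flight backup."""
    return backup_interval_h + backup_duration_h

def estimated_rto_hours(restore_gb, restore_rate_gb_per_h, rebuild_h):
    """RTO: time to restore the data plus time to rebuild and redirect
    services to the recovery site."""
    return restore_gb / restore_rate_gb_per_h + rebuild_h

# Nightly tape backup taking 2 hours: up to 26 hours of work at risk.
print(worst_case_data_loss_hours(24, 2))           # → 26
# Restoring 500 GB at 100 GB/h, plus 4 hours to rebuild services.
print(estimated_rto_hours(500, 100, 4))            # → 9.0
```

Continuous replication to a cloud site drives the first number toward zero and, as discussed next, can shrink the second dramatically as well – which is precisely the cost/benefit trade small companies need to price out.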

Christoph Streit, founder of Hamburg-based ScaleUp Technologies, believes cloud computing may offer a very cost-effective, powerful solution for companies needing not only to protect their company’s data, but also reduce their recovery point objectives to near zero.

“In a traditional disaster recovery model the organization must have an exact duplicate of their hardware, applications, and data in the disaster recovery location” explains Christoph. “With cloud computing models it is possible to replicate applications virtually, spinning up capacity as needed to meet the processing requirements of the organization in the event a primary processing location becomes unavailable.”

ScaleUp did in fact demonstrate their ability to replicate databases between data centers in an October 2009 test with Cari.net, where ScaleUp was able to bring up a VPN appliance and replicate data and applications between Germany and Cari.net’s data center in San Diego, California.
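Conceptually, a failover of the kind ScaleUp demonstrated reduces to a health check on the primary site plus an on-demand launch of the replicated application at the standby site.  This sketch is not ScaleUp’s or 3Tera’s actual API – it assumes only a hypothetical provider object exposing a `launch()` method:

```python
import time

def failover(primary, standby_provider, app_template, is_healthy, max_wait_s=300):
    """Fail over to a standby cloud provider if the primary site is down.

    `standby_provider` stands in for whichever cloud API the organization
    actually uses; only a `launch()` method is assumed for illustration.
    """
    if is_healthy(primary):
        return primary  # primary still serving traffic; nothing to do
    instance = standby_provider.launch(app_template)  # spin up replicated app
    deadline = time.time() + max_wait_s
    while time.time() < deadline:
        if is_healthy(instance):
            return instance
        time.sleep(5)  # poll until the standby passes its health check
    raise RuntimeError("standby instance did not become healthy in time")
```

The economics follow from the shape of the code: the standby capacity is only launched (and billed) when the check fails, instead of sitting idle as duplicate hardware in a second data center.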

While there may be issues keeping personal data in compliance with European data protection laws, nearly every company and organization around the world participates in a global marketplace.  This means applications and data serving the global market cannot be considered local, and the next logical step is to extend access and presentation of the company’s network presence as close to the network edge (customers) as possible.

Some companies may have physical network capacity in multiple geographies, others may look to companies such as ScaleUp to develop relationships with other cloud service providers to allow “federated” relationships.

Until a true industry standard defines the data structures and protocols to use between cloud infrastructure and platform providers, it is probably easiest for relationships to develop between companies using the same platform-as-a-service (PaaS) application.  Such is the case with ScaleUp and Cari.net, who used a common platform provided by 3Tera’s AppLogic.

The cloud service provider industry will provide a tremendous service to small and medium businesses that normally cannot afford near-zero recovery time and recovery point objectives.  Whether it is real-time replication of entire databases, subsets of databases, or simply parsing correlated data from edge locations at regular intervals, disaster recovery modeling is changing.

In some cases a backup location can be established simply by logging into a cloud service provider and opening an account with a credit card – or through a very fast negotiation with the service provider.  Certainly not without cost, but potentially at a much lower cost of operation than models requiring physical data center space, hardware, and operations staff at each location.

The important lesson for small companies is that a disaster – whether a physical disaster such as a fire in the data center, or data corruption – may limit or prevent a company’s ability to continue operations.  Adding cloud services to the disaster recovery model may provide a very powerful, simplified, and cost-effective way to protect your business.

Is Hawaii a Candidate for International ICT Assistance?

Try a search engine query on “Hawaii CIO” or “Hawaii Chief Information Officer.”  You might get a couple of corporate links, or possibly the University of Hawaii’s CIO link, but the only state agency within the first two pages of links is the Information and Communications Services Division of the Department of Accounting and General Services (DAGS).  The first impression, once you hit the Hawaii Information and Communications Services Division (ICSD) landing page on the State of Hawaii’s website, is the microwave tower graphic.

The Information and Communication Services Division (ICSD) of the Department of Accounting and General Services is the lead agency for information technology in the Executive Branch.  It is responsible for comprehensively managing the information processing and telecommunication systems in order to provide services to all agencies of the State of Hawaii.  The ICSD plans, coordinates, organizes, directs, and administers services to ensure the efficient and effective development of systems.

In fact, the Hawaii CIO, as appointed by the governor in 2004, acts in this capacity as a part-time job; his “day job” is comptroller of the State.  In that role, the only true function managed within the ICSD is oversight of the state’s main data center.

Browsing through the ICSD site is quite interesting.  Having spent a fair amount of time drilling through California’s CIO landing page – where you are greeted with a well-stocked mashup of no fewer than 14 interactive objects giving access to everything from current news, to blog entries, to CIO department links, to instructions for following the CIO’s activities through Twitter, Facebook, and YouTube (all meetings and public activities are recorded and made available online) – I expected a similar menu of objects on the Hawaii page.  Clearly Hawaii is a much smaller state, so expectations were tempered, but I did expect a reasonably clear set of directions I could take to learn more about Hawaii’s office of the CIO.

Hot Buttons on the ICSD Landing Page

My vision of a governmental CIO was defined by Vivek Kundra, CIO of the United States – a guy who talks about the future of ICT, the strategy of applying ICT to government projects, and leadership, in both thought and action, of the government as a role model for the rest of the country.  Cloud computing, data center consolidation, green technology, R&D, cooperation with the private sector, aggressive use of COTS (commercial off-the-shelf) technology.  I love the guy.

NOTE: ICT is a term unfamiliar to most Americans. It means “Information and Communications Technology,” and is a term most other countries around the world have adopted to acknowledge the critical role communications plays in any information technology discussion.

So I select the button for IT Standards.  Cool.  Being a cloud computing enthusiast, to put it mildly, I could not help clicking on the item for 11.17, Virtual Storage Access Method, expecting it might give me some insight into the cloud computing and virtualization initiatives Hawaii is taking under the guidance of either the CIO or the ICSD.

VSAM is an IBM/MVS Operating System access method system. It is not a data base management system. VSAM supports batch users, on-line transactions, and data base applications. (VSAM Entry on ICSD website)

Multiple Virtual Storage, more commonly called MVS, was the most commonly used operating system on the System/370 and System/390 IBM mainframe computers. It was developed by IBM, but is unrelated to IBM’s other mainframe operating system, VM. (Wikipedia)

MVS?  You mean the MVS used in the 1970s?

Oops.  Well, how about an overview of IT Standards?  Written in 2003, the document is generic cut-and-paste information that could be found in pretty much any basic IT book, with the exception that everything is manual – meaning any standard, recommendation, or update must be distributed via CD-ROM.  Well, perhaps the document management and approval process doesn’t need to be online.

Enough – I am not excited by the ICSD website. Let’s look at a couple other areas that might provide a bit more information on how Hawaii is doing with topics like overall IT architecture, disaster recovery, and IT strategies.

“The CIO’s role is to provide vision and leadership for developing and implementing information technology initiatives.” (Info-Tech Research Group)

In a recent report delivered by the Hawaii state auditor, “Audit of the State of Hawai’i’s Information Technology: Who’s in Charge?” – a disturbing summary of the auditor’s findings declare:

  1. The State’s IT leaders provide weak and ineffective management.
  2. The State no longer has a lead agency for information technology.

The audit further finds the guidance and governance provided by the ICSD ineffective, stating:

“ICSD was originally tasked to compile an overall State technology plan from annual technology plans submitted by the various departments. However, ICSD no longer enforces or monitors compliance with this requirement. In fact, the division has actively discouraged departments from submitting these distributed information processing and information resource management plans.”

Finally, the report concludes with an ominous message for the state:

“If the State’s management does not improve, the State will eventually be compelled to outsource or co-source IT functions, a complicated and expensive undertaking. Based on the issues that have been raised, future focus areas include data security and business continuity. Lack of an alternate data center and general lack of business continuity and disaster recovery plans tempt fate, since a major disruption of State IT services is not a matter of if, but when.”

If you would like some more interesting food for controversy, dig into the state’s disaster recovery situation, which was recently summarized with the statement: “a breakdown of or interruption to data center services or telecommunication services will seriously diminish the ability of State (of Hawaii) agencies to deliver critical services to the public and other federal, state, and local government agencies.  The primary data center serves all three branches of State government.  The loss of the primary data center would impact all State employees, and without an alternative data center, health, public safety, child protective services, homeland security and other critical services would not be delivered.”

How International Organizations Might Help Hawai’i’s ICT and eGovernment Program

United Nations Development Programme (UNDP)

A surge in the use of ICTs by government, civil society, and the private sector started in the late 1990s, with the aim not only of improving government efficiency and service delivery, but also of promoting increased participation of citizens in the various governance and democratic processes.  The use of ICT in the overall field of democratic governance relates to three distinct areas where UNDP has already been doing innovative work to support the achievement of the MDGs.

  • First, e-governance, which encompasses the use of ICT tools to enhance government efficiency, transparency, accountability, and service delivery, as well as citizen participation and engagement in the various democratic and governance processes.
  • Second, the mainstreaming of ICT into the various UNDP Democratic Governance Practice service lines such as Parliaments (e-parliaments), elections (e-elections) and others.
  • And third, the governance of the new ICT which addresses the institutional mechanisms related to emerging issues of privacy, security, censorship and control of the means of information and communications at the national and global levels.

That sounds awfully darn close to objectives Hawaii might find useful in developing its own long-term, strategic ICT plan.  Taking a look at some of the countries listed in UNDP’s “UN eGovernment Survey 2008: From eGovernment to Connected Governance,” we find a lot of great government program case studies, such as the government of Singapore:

“Similarly, as part justification for ranking Singapore as its 2007 leader in e-government and customer service, Accenture reports that in terms of back-end infrastructure, the Singaporean government has made an enterprise architecture called SGEA a strategic thrust. SGEA offers a blueprint for identifying potential business areas for interagency collaboration as well as technology, data and application standards to facilitate the sharing of information and systems across agencies.”

That sounds good. As do a couple dozen other examples of equally relevant eGovernment programs included in the study. In fact, current eGovernment development projects in Vietnam, Indonesia, Ghana, and Palestine follow a well documented plan to design, train, plan, and implement eGovernment projects. And they are working.

Perhaps Hawai’i could hire a full-time CIO, participate in US and international programs supporting development of eGovernment (including the US Trade and Development Agency, which sponsors eGovernment programs in many developing countries – such as Palestine, Ethiopia, and Ghana), and use that to develop a 22nd century ICT plan for Hawai’i.

The line is drawn

As taxpayers and residents of America’s 50th state, we deserve the best government and governance possible.  Let’s take a bit of responsibility ourselves.  Study the issue, contact your representative, and demand either an explanation of the current situation – or even better, offer recommendations on how we can make Hawai’i’s situation better.  Such as:

  1. Hire a professional state CIO
  2. Give the CIO authority
  3. Develop a state-wide ICT plan
  4. Execute

We initially touched the topic in a previous post “A Developing Country that Can Teach Hawaii a Lesson.” We’ll continue exploring the topic, and hopefully start working on positive, constructive ideas on how we can make our state more efficient, and a better place to work and live.
