The Trouble with IT Disintermediation

I have a client who is concerned about some of their departments bypassing the organization’s traditional IT process and going directly to cloud vendors for their IT resource needs.  This is not really unique, as the cloud computing industry has disrupted IT provisioning processes across the board, not to mention caused a near complete loss of control over configuration management databases and inventories.

IT service disintermediation occurs when end users cut out the middleman when procuring ICT services, and go directly to the service provider with an independent account. Disintermediation normally occurs when one of the following conditions exist:

  1. The end user desires to remain independent, for reasons of control, use of decentralized budgets, or simply individual pride.
  2. The organizational service provider does not have a suitable resource available to meet the end user’s needs.
  3. The end user does not have confidence in the organizational service provider.
  4. The organizational service provider has a suitable service, but is not able or willing to provision the service to meet the end user’s demands for timing, capacity, or other reasons.  This is often the result of a lengthy, bureaucratic process which is neither agile nor flexible, and does not promote a “sense of urgency” to complete provisioning tasks.
  5. The organizational service provider is not able, or is unwilling, to accommodate “special” orders which fall outside the service provider’s portfolio.
  6. The organizational service provider does not respond to rapidly changing market, technology, and usage opportunities, with the result of creating barriers for the business units to compete or respond to external conditions.

The result of this is pretty bad for any organization.  Consequences of this failure may include:

  • Loss of control over IT budgets – decentralized IT budgets which do not fall within a strategic plan or policy cannot be controlled.
  • Inability to develop and maintain organizational relationships with select or approved vendors.  Vendors relish the potential of disrupting single points of contact within large organizations, as it allows them to develop and sustain multiple high value contracts with the individual agencies, rather than falling within volume purchasing agreements, audits, standards, security, SLAs, training, and so on.
  • Individual applications will normally result in incompatible information silos.  While interoperability within an organization is a high priority, particularly when looking at service-orientation and organizational decision support systems, systems disintermediation will result in failure, or extreme difficulty, in developing data sharing structures.
  • Poor continuity of operations and disaster management.  Non-standard systems are normally not fully documented, and often are not made visible to the organization’s IT management or support operations.  Thus, when disasters occur, there is a high risk of complete data loss, or an inability to quickly restore full services to the organization, customers, and general user base.
  • There is also difficulty in data/systems portability.  If or when a service provider fails to meet the expectations of the end user, goes out of business, or for some reason decides not to continue supporting the user, then the existing data and systems should be portable to another service provider (this is also within the NIST standard).

While there are certainly other considerations, this covers the main pain points disintermediation might present.

The next obvious question is how best to mitigate the condition.  This is a more difficult issue than in the past, as it is now so easy to establish an account and resources through cloud companies with a simple credit card, or through an aggressive sales person.  In addition, the organizational service provider must follow standard architectural and governance processes, which include continual review and improvement cycles.

As technology and organizational priorities change, so must the policies change to recognize and accommodate reasonable change.  End users must be fully aware of the products and services IT departments have to offer, and of course IT departments must have an aggressive sense of urgency in responding to and fulfilling those requirements.

Responsibility falls in two areas: 1) ensuring the organizational service provider is able to meet the needs of end users (or is able to find solutions in a timely manner to assist the end user), and 2) developing policies and processes which not only facilitate end user acquisition of resources, but also establish accountability when those policies are not followed.


The Changing Face of Technology and Innovation

A friend of mine’s son recently returned from an extended absence which basically removed him from nearly all aspects of technology, including the Internet, for a bit longer than 5 years. Upon his return, observing him restore his awareness of technology and absorb everything new developed over the past 5 years was both exciting and moving.

To be fair, the guy grew up in an Internet world, with access to online resources including Facebook, Twitter, and other social applications.

The interesting part of his re-introduction to the “wired” world was watching the comprehension flashes he went through when absorbing the much higher levels of application and data integration, and speed of network access.

As much as all of us continue to complain about terrible access speeds, it is remarkable to see how excited he became when learning he could get 60Mbps downloads from just a cable modem, download HD movies to a PC in just a few moments, or stream HD video through a local device.

Not to mention there is now almost no need for CATV at all to continue enjoying nearly any network or alternative programming desired.

Continuing to observe the transformation, it took him about 2 minutes to nail up a multipoint video call with 4 of his friends, take a stroll through my eBook library, and prepare a strategy for his own digital move into cloud-based applications, storage, and collaboration.

Looking back to my personal technical point of reference from around the time this kid dropped out, I dug up blog articles I’ve posted with titles such as:

  • “Flattening the American Internet” (discussing the need for more Internet Exchange Points in the US)
  • “IXPs and Disaster Recovery” (the role IXPs could and should play in global disasters)
  • “2009 – The Year of IPv6 and Internet Virtualization”
  • “The Law of Plentitude and Chaos Theory”
  • “Why I Hate Kayaks” (the hypocrisy of some environmentalists)
  • “Contributing to a Cause with Technology – The World Community GRID” (the cloud before the cloud)
  • “Blackberrys, PDA Phones, and Frog Soup”

And so on…

We have come a long way technically over those years, but the amazing thing is the near immediacy of the young man absorbing those changes. I was almost afraid with all the right brain flashes that he would have a breakdown, but the enjoyment he showed diving into the new world of “apps” and anytime, anywhere computing appears to only be accelerating.

Now the questions are starting to pop up. “Can we do this now?” “It would be nice if this was possible.”

Maybe it is because he grew up in a gaming world, or because he was dunked into the wired world about the same time he learned to stand on his own feet. Or maybe the synaptic connections in his brain are just much better wired than those of my generation.

Perhaps the final, and most important, revelation for me is that young people have a tremendous capacity to exploit the technology resources developed in just a few short years. Collaboration tools which astound my generation are slow and boring to the new crew. The Internet is expected, it is a utility, and it is demanded at broadband speeds which, again, to somebody whose first commercial modem was a large card capable of 300 baud (do you even know what baud means?), are still mind boggling.

The new generations are going to have a lot more fun than we did, on a global scale.

I am jealous.

SDNs in the Carrier Hotel

Carrier hotels are an integral part of global communications infrastructure.  The carrier hotel serves a vital function, specifically the role of a common point of interconnection between facility-based (physical cable in either terrestrial, submarine, or satellite networks) carriers, networks, content delivery networks (CDNs), Internet Service Providers (ISPs), and even private or government networks and hosting companies.

In some locations, such as the One Wilshire Building in Los Angeles, or 60 Hudson in New York, several hundred carriers and service providers may interconnect physically within a main distribution frame (MDF), or virtually through interconnections at Internet Exchange Points (IXPs) or Ethernet Exchange points.

Carrier hotel operators understand that technology is starting to overcome many of the traditional forms of interconnection.  With 100Gbps wavelengths and port speeds, network providers are able to push many individual virtual connections through a single interface, reducing the need for individual cross connections or interconnections to establish customer or inter-network circuits.

While connections, including Internet peering and VLANs, have been available for many years through IXPs and the use of circuit multiplexing, software defined networks (SDNs) are poised to provide a new model of interconnection at the carrier hotel, forcing not only an upgrade of supporting technologies, but also a reconsideration of the entire model and concept of how the carrier hotel operates.

Several telecom companies have announced their own internal deployments of order fulfillment platforms based on SDN, including PacNet’s PEN and Level 3’s (originally Time Warner) pilot test at DukeNet, proving that circuit design and provisioning can be easily accomplished through SDN-enabled orchestration engines.

However, inter-carrier circuit or service orchestration is not yet in common use at the main carrier hotels and interconnection points.

Taking a closer look at the carrier hotel environment, we see an opportunity: if the carrier hotel operator provides an orchestration platform which allows individual carriers, networks, cloud service providers, CDNs, and other networks to connect at a common point, with standard APIs enabling communication between the different participants’ network or service resources, then interconnection fulfillment may be completed in a matter of minutes, rather than days or weeks as in the current environment.

This capability goes even a step deeper.  Let’s say carrier “A” has an enterprise customer connected to their network.  The customer has an on-demand provisioning arrangement with Carrier “A,” allowing the customer to establish communications not only within Carrier “A’s” network resources, but also to flow through the carrier hotel’s interconnection broker into, say, a cloud service provider’s network.  The customer should be able to design and provision their own solutions – based on the availability of internal and interconnection resources available through the carrier.

Participants will announce their available resources to the carrier hotel’s orchestration engine (network access broker), and those available resources can then be provisioned on-demand by any other participant (assuming the participants have a service agreement or financial accounting agreement, either based on the carrier hotel’s standard or on individual service agreements established between participants).
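To make the idea concrete, below is a minimal sketch of such a network access broker. The class names, service labels, and matching logic are my own illustrative assumptions, not an actual carrier hotel or orchestration product API.

```python
# Minimal sketch of a hypothetical carrier hotel interconnection broker.
# All names and the matching logic are illustrative assumptions, not a real API.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Resource:
    provider: str        # e.g., "Carrier A" or "Cloud Provider X"
    service: str         # e.g., "ethernet-10G"
    capacity_gbps: int
    available: bool = True


class InterconnectionBroker:
    """Participants announce resources; other participants provision them on demand."""

    def __init__(self) -> None:
        self.catalog: List[Resource] = []

    def announce(self, resource: Resource) -> None:
        # A participant publishes an available resource to the broker's catalog.
        self.catalog.append(resource)

    def provision(self, requester: str, service: str, gbps: int) -> Optional[Resource]:
        # Match the request against announced resources; a production broker would
        # also verify the service or financial agreements described above.
        for r in self.catalog:
            if r.available and r.service == service and r.capacity_gbps >= gbps:
                r.available = False
                print(f"{requester}: provisioned {gbps}G via {r.provider} ({r.service})")
                return r
        return None


# Carrier A's enterprise customer reaches a cloud provider through the broker.
broker = InterconnectionBroker()
broker.announce(Resource("Cloud Provider X", "ethernet-10G", 10))
broker.provision("Carrier A / enterprise customer", "ethernet-10G", 10)
```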

If we use NIST’s characteristics of cloud computing as a potential model, then the carrier hotels interconnection orchestration engine should ultimately provide participants:

  • On-demand self-service provisioning
  • Elasticity, meaning short term usage agreements, possibly even down to the minute or hour
  • Resource pooling, or a model similar to a spot market (in competing markets where multiple carriers or service providers may be able to provide the same service)
  • Measured service (usage based or usage-sensitive billing  for service use)
  • And of course broad network access – currently using either 100Gbps or multiples of 100Gbps (until 1Tbps ports become available)

While layer 1 (physical) interconnection of network resources will always be required – the bits need to flow on fiber or wireless at some point – the future of carrier and service resource intercommunication must evolve to accept and acknowledge the need for user-driven, near real-time provisioning of network and other service resources, on a global scale.

The carrier hotel will continue to play an integral role in bringing this capability to the community, and the future is likely to be based on software-driven, on-demand meet-me rooms.

PTC 2015 Wraps Up with Strong Messages on SDNs and Automation

Software Defined Networking and Network Function Virtualization (NFV) themes dominated workshops and side conversations throughout the PTC 2015 venue in Honolulu, Hawai’i this week.

SDNs, or more specifically provisioning automation platforms for service provider interconnections, have crept into nearly all marketing materials and elevator pitches in discussions with submarine cable operators, networks, Internet Exchange Points, and carrier hotels.

While some of the material may have included a bit of “SDN washing,” for the most part each operator and service provider engaging in the discussion understands, and is scrambling to address, the need for communications access, and is very serious in acknowledging a pending industry “paradigm shift” in service delivery models.

Presentations by companies such as Ciena and Riverbed showed a mature service delivery structure based on SDNs, while PacNet and Level 3 Communications (formerly TW Telecom) presented functional on-demand self-service models of both service provisioning and a value-added marketplace.

Steve Alexander from Ciena explained some of the challenges which the industry must address, such as the development of cross-industry SDN-enabled service delivery and provisioning standards.  In addition, as service providers move into service delivery automation, they must still be able to provide a discriminating or unique selling point by considering:

  • How to differentiate their service offering
  • How to differentiate their operations environment
  • How to ensure industry-acceptable delivery and provisioning time cycles
  • How to deal with legacy deployments

Alexander also emphasized that as an industry we need to get away from physical wiring when possible.  With 100Gbps ports, and the ability to create a software abstraction of individual circuits within the 100Gbps resource pool (as an example), there is a lot of virtual or logical provisioning that can be accomplished without the need for dozens or hundreds of physical cross connections.
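As a rough illustration of that abstraction, the sketch below carves logical circuits out of a single 100Gbps port until the pool is exhausted. The class and field names are hypothetical; a real controller would obviously manage far more state (VLANs, optics, policy).

```python
# Hypothetical sketch: logical circuits carved from a single 100Gbps physical port,
# replacing individual physical cross connections. Names are illustrative only.
class PortPool:
    def __init__(self, capacity_gbps: int = 100):
        self.capacity_gbps = capacity_gbps
        self.circuits = {}          # circuit name -> allocated Gbps

    def allocate(self, name: str, gbps: int) -> bool:
        # Refuse the circuit if the remaining pool cannot carry it.
        if gbps > self.remaining():
            return False
        self.circuits[name] = gbps
        return True

    def release(self, name: str) -> None:
        self.circuits.pop(name, None)

    def remaining(self) -> int:
        return self.capacity_gbps - sum(self.circuits.values())


port = PortPool(100)
port.allocate("carrier-A-to-cloud-X", 40)
port.allocate("carrier-B-to-CDN-Y", 25)
print(port.remaining())   # 35 Gbps still available on the same physical interface
```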

The result of this effort should be an on-demand provisioning environment both within a single service provider, as well as in a broader community marketplace such as a carrier hotel or large telecom interconnection facility (e.g., The Westin Building, 60 Hudson, One Wilshire).  Some examples of actual and required deployments included:

  • A bandwidth on-demand marketplace
  • Data center interconnections, including within data center operators which have multiple interconnected meet-me-points spread across a geographic area
  • Interconnection to other services within the marketplace such as cloud service providers (e.g., Amazon Direct Connect, Azure, Softlayer, etc.), content delivery networks, SaaS, and disaster recovery capacity and services

Robust discussions on standards also spawned debate.  With SDNs, much like any other emerging technology or business model, there are both competing and complementary standards.  Even terms such as Network Function Virtualization / NFV, while good, do not have much depth within standard taxonomies or definitions.

During the PTC 2015 session entitled “Advanced Capabilities in the Control Plane Leveraging SDN and NFV Toward Intelligent Networks,” a long listing of current standards and products supporting the “concept” of SDNs was presented, including:

  • OpenContrail
  • OpenDaylight
  • OpenStack
  • OpenFlow
  • OPNFV
  • ONOS
  • OvS
  • Project Floodlight
  • Open Networking
  • and on and on….

For consumers and small network operators this is a very good development, and will certainly usher in a new era of on-demand self-service capacity provisioning, elastic provisioning (short term service contracts even down to the minute or hour), carrier hotel-based bandwidth and service marketplaces, and variable usage metering and costs, allowing much better use of OPEX budgets.

For service providers (according to discussions with several North Asian telecom carriers), it is not quite as attractive, as they generally would like to see long term, set (or fixed) contracts or wholesale capacity sales.

The connection and integration of cloud services with telecom or network services is quite clear.  At some point provisioning of both telecom and compute/storage/application services will be through a single interface, on-demand, elastic (use only what you need and for only as long as you need it), usage-based (metered), and favor the end user.

While most operators get the message, and are either in the process of developing and deploying their first iteration solution, others simply still have a bit of homework to do.  In the words of one CEO from a very large international data center company, “we really need to have a strategy to deal with this multi-cloud, hybrid cloud, or whatever you call it thing.”

Oh my…

PTC 2015 Focuses on Submarine Cables and SDNs

In an informal survey of words used during seminars and discussions, two main themes are emerging at the Pacific Telecommunications Council’s 2015 annual conference.  The first, as expected, is development of more submarine cable capacity both within the Pacific, as well as to end points in ANZ, Asia, and North America.  The second is software defined networking (SDN), which as envisioned could quickly begin to re-engineer the gateway and carrier hotel interconnection business.

New cable developments, including Arctic Fiber, Trident, SEA-US, and APX-E, have sparked a lot of interest.  One discussion at Sunday morning’s Submarine Cable Workshop highlighted the need for Asian (and other) regions to find ways to bypass the United States, not just for performance reasons, but also to keep US government agencies from intercepting and potentially exploiting data hitting US networks and data systems.

The bottom line with all submarine cable discussions is the need for more, and more, and more cable capacity.  Applications using international communications capacity, notably video, are consuming capacity at rates which are driving fear that cable operators won’t be able to keep up with demand.

However, perhaps the most interesting, and frankly surprising, development is with SDNs in the meet-me room (MMR).  Products such as PacNet’s PEN (PacNet Enabled Network) are finally putting reality into on-demand, self-service circuit provisioning, and soon cloud computing capacity provisioning, within the MMR.  Demonstrations showed how a network, or user, can provision a point-to-point circuit from 1Mbps to 10Gbps within a minute.

In the past, on-demand provisioning of interconnections was limited to Internet Exchange Points, fiber cross connects, VLANs, and point-to-point Ethernet connections.  Now, as carrier hotels and MMRs acknowledge the need for rapid provisioning of elastic (rapid addition and deletion of bandwidth or capacity) resources, physical cross connects and IXP peering tools will not be adequate for market demands in the future.

SDN models, such as PacNet’s PEN, are a very innovative step towards this vision.  The underlying physical interconnection infrastructure simply becomes a software abstraction for end users (including carriers and networks) allowing circuit provisioning in a matter of minutes, rather than days.
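A toy model of what that end-user experience implies is sketched below: short-term, usage-metered circuits requested in software. This is not PacNet’s PEN API; the billing rate and field names are assumptions made purely for illustration.

```python
# Toy model of elastic, usage-metered circuit provisioning, in the spirit of the
# on-demand MMR services described above. The rate and field names are assumptions.
from dataclasses import dataclass


@dataclass
class Circuit:
    endpoint_a: str
    endpoint_b: str
    mbps: int
    minutes: int                     # short-term contract, down to the minute

    def usage_charge(self, rate_per_mbps_minute: float = 0.001) -> float:
        # Measured service: bill only for the capacity and time actually used.
        return self.mbps * self.minutes * rate_per_mbps_minute


circuit = Circuit("Los Angeles MMR", "Hong Kong MMR", mbps=10_000, minutes=90)
print(f"Charge for a 90-minute 10G circuit: ${circuit.usage_charge():,.2f}")
```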

The main requirement for full deployment is to “sell” carriers and networks on the concept, as key success factors will revolve around the network effect of participant communities.  Simply, the more connecting and participating networks within the SDN “community,” the more value the SDN MMR brings to a facility or market.

A great start to PTC 2015.  More PTC 2015 “sidebars” on Tuesday.

You Want Money for a Data Center Buildout?

A couple years ago I attended several “fast pitch” competitions and events for entrepreneurs in Southern California, all designed to give startups a chance to “pitch” their ideas in about 60 seconds to a panel of representatives from the local investment community.  Similar to television’s “Shark Tank,” most of the idea pitches were harshly critiqued, with the real intent of assisting participating entrepreneurs in developing a better story for approaching investors and markets.

While very few of the pitches received a strong, positive response, I recall one young guy who really set the panel back a step in awe.  The product was related to biotech, and the panel provided a very strong, positive response to the pitch.

Wishing to dig a bit deeper, one of the panel members asked the guy how much money he was looking for in an investment, and how he’d use the money.

“$5 million,” he responded, with a resounding wave of nods from the panel.  “I’d use around $3 million for staffing, getting the office started, and product development.”  Another round of positive expressions.  “And then we’d spend around $2 million setting up in a data center with servers, telecoms, and storage systems.”

This time the panel looked as if they’d just taken a crisp slap to the face.  After a moment of collection, the panel spokesman launched into a dressing down of the entrepreneur, stating “I really like the product, and think your vision is solid.  However, with a greater than 95% chance of your company going bust within the first year, I have no desire to be stuck with $2 million worth of obsolete computer hardware, and potential contract liabilities, once you shut down your data center.  You’ve got to use your head and look at going to Amazon for your data center capacity and forget this data center idea.”

Now it was the entire audience’s turn to take a pause.

In the past, IT managers really placed buying and controlling their own hardware, in their own facility, as a high priority – with no room for compromise.  For perceptions of security, a desire for personal control, or simply a concern that outsourcing would limit their own career potential, server closets and small data centers were a common characteristic of most small offices.

At some point a need to have proximity to Internet or communication exchange points, or simple limitations on local facility capacity started forcing a migration of enterprise data centers into commercial colocation.  For the most part, IT managers still owned and controlled any hardware outsourced into the colocation facility, and most agreed that in general colocation facilities offered higher uptime, fewer service disruptions, and good performance, in particular for eCommerce sites.

Now we are at a new IT architecture crossroads.  Is there really any good reason for a startup, medium, or even large enterprise to continue operating their own data center, or even their own hardware within a colocation facility?  Certainly if the average CFO or business unit manager had their choice, the local data center would be decommissioned and shut down as quickly as possible.  The CAPEX investment, carrying hardware on the books for years of depreciation, lack of business agility, and dangers of business continuity and disaster recovery costs force the question of “why don’t we just rent IT capacity from a cloud service provider?”
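A back-of-the-envelope comparison shows why the CFO asks that question. Every figure below is a hypothetical assumption, not market pricing; the point is the structure of the comparison, not the numbers.

```python
# Back-of-the-envelope comparison behind the CFO's question. Every figure here is
# a hypothetical assumption for illustration, not actual market pricing.
hardware_capex = 2_000_000          # owned servers, storage, and network gear
depreciation_years = 4
colo_and_ops_per_year = 300_000     # space, power, remote hands, support staff

owned_cost_per_year = hardware_capex / depreciation_years + colo_and_ops_per_year

cloud_opex_per_month = 45_000       # rented IaaS capacity, scaled to actual need
cloud_cost_per_year = cloud_opex_per_month * 12

print(f"Owned/colocated: ${owned_cost_per_year:,.0f} per year")
print(f"Cloud (rented):  ${cloud_cost_per_year:,.0f} per year")
```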

Many still question the security of public clouds, many still question the compliance issues related to outsourcing, and many still simply do not want to give up their “soon-to-be-redundant” data center jobs.

Of course it is clear most large cloud computing companies have much better resources available to manage security than a small company, and have made great advances in compliance certifications (mostly due to the US government acknowledging the role of cloud computing and changing regulations to accommodate those changes).  If we look at the US Government’s FedRAMP certification program as an example, security, compliance, and management controls are now a standard – open for all organizations to study and adopt as appropriate.

So we get back to the original question: what would justify a company continuing to develop data centers, when a virtual data center (as the first small step in adopting a cloud computing architecture) will provide better flexibility, agility, security, and performance, at lower cost, than operating local or colocated physical IT infrastructure?  Sure, exceptions exist, including some specialized hardware interfaces to support mining, health care, or other very specialized activities.  However, if you’re not in the computer or switch manufacturing business, can you really continue justifying CAPEX expenditures on IT?

IT is quickly becoming a utility.  As a business we do not plan to build roads, build water distribution, or build our own power generation plants.  Compute, telecom, and storage resources are becoming a utility, and IT managers (and data center / colocation companies) need to do a comprehensive review of their business and strategy, and find a way to exploit this technology reality, rather than allow it to pass them by.

Putting Enterprise Architecture Principles to Work

This week brought another great consulting gig, working with old friends and respected colleagues.  The challenge driving the consultation was brainstorming a new service for their company, and how best to get it into operation.

The new service vision was pretty good.  The service would fill a hole, or shortfall, in the industry which would better enable their customers to compete in markets both in the US and abroad.  However, the process of planning and delivering this service, well, simply did not exist.

The team’s sense of urgency to deliver the service was high, based on a perception that if they did not move quickly, they would suffer an opportunity loss while competitors moved to fill the service need themselves.

While it may have been easy to “jump on the bandwagon” and share the team’s enthusiasm, they lacked several critical components of delivering a new service, which included:

  • No specific product or service definition
  • No market analysis or survey, even at a high level
  • No cost analysis or revenue projection
  • No risk analysis
  • No high level implementation plan or schedule

“We have great ideas from vendors, and are going to try and put together a quick pilot test as quickly as possible.  We are trying to gather a few of our customers to participate right now” stated one of the team.

At that point, reluctantly, I had to put on the brakes.  While not making any attempt to dampen the team’s enthusiasm, to promote a successful service launch I forced them to consider additional requirements, such as:

  • The need to build a business case
  • The need for integration of the service into existing back office systems, such as inventory, book-to-bank, OSS, management and monitoring, finance and billing, executive dashboards (KPIs, service performance, etc.)
  • Staffing and training requirements
  • Options of in-sourcing, outsourcing, or partnering to deliver the service
  • Developing RFPs (even simple RFPs) to help evaluate vendor options
  • and a few other major items

“That just sounds like too much work.  If we need to go through all that, we’ll never deliver the service.  Better to just work with a couple vendors and get it on the street.”

I should note the service would touch many, many people in the target industry, which is very tech-centric.  Success or failure of the service could have a major impact on the success or failure of many in the industry.

As a card-carrying member of the enterprise architecture cult, and a proponent of other IT-related frameworks such as ITIL, COBIT, Open FAIR, and other business modeling approaches, I recognize there are certainly bound to be conflicts between following a very structured approach to building business services and the need for agile creativity and innovation.

In this case, I asked the team to indulge me for a few minutes while I mapped out a simple, structured approach to developing and delivering the envisioned service.  By using a simplified version of the TOGAF Architecture Development Method (ADM), and adding a few lines related to standards and service development methodology, such as the vision –> AS-IS –> gap analysis –> solutions development model, it did not take long for the team to reconsider their aggressive approach.
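For those curious what the exercise looked like, here is a rough sketch of the vision –> AS-IS –> gap analysis –> solution flow expressed as a simple checklist. The phase names loosely follow the ADM; the checklist items are drawn from the lists above, not the formal framework.

```python
# Rough sketch of the simplified "vision -> AS-IS -> gap analysis -> solution" flow
# discussed above. Phase names loosely follow the TOGAF ADM; the checklist items
# are examples drawn from this article, not the formal framework.
service_plan = {
    "vision": ["product/service definition", "high level market survey"],
    "as_is": ["current back office systems", "staffing and skills baseline"],
    "gap_analysis": ["missing OSS/billing integration", "training requirements"],
    "solution": ["in-source vs outsource decision", "vendor RFPs", "pilot test"],
}

def unresolved(plan: dict, completed: set) -> list:
    # Items still open across all phases; the pilot should not launch until empty.
    return [item for items in plan.values() for item in items if item not in completed]

done = {"product/service definition", "current back office systems"}
for item in unresolved(service_plan, done):
    print("open:", item)
```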

When preparing a chart of timelines using the “TOGAF Light” EA framework, the timelines were oddly similar to the aggressive approach.  The main difference was that, at the end of the EA approach, the result was not just a service, but a very logical, disciplined, measurable, governable, and flexible service.

Sounds a bit utopian, but in reality we were able to get to the service delivery with a better product, without sacrificing any innovation, agility, or market urgency.

This is the future of IT.  As we continue to move away from the frenzy of service deliveries of the Internet Age, and begin focusing on the business nature of IT, including the role IT plays in critical global infrastructures, the discipline of following product and service development and delivery methodologies will continue to gain importance.

Business Drives Transition to IT as a Utility

Is there a point where business can safely assume they have hit the limit of what traditional IT organizations have to offer?  In an Internet and data driven world, does IT simply lack the agility and depth needed to fulfill business requirements and need for innovation?

Parts of cloud computing have sounded a loud and painful wake-up call for many IT managers.  Even at the most simple level, Infrastructure as a Service (IaaS), it might be fair to say this is simply a utility to accelerate data center decommissioning, and the process of physically decoupling underlying compute, storage, and network infrastructure from the business.

Due to a lack of PaaS and SaaS interface and building block standards, we still have a long way to go before we can effectively call either a utility, or truly serve the needs of interoperability and systems integration.

Of course this idea is not new.  Nicholas Carr kicked off the idea in his great view of the future, “The Big Switch,” with a lot of great analogies about compute, network, and storage capacity as a modern day adaptation of the electrical grid.

We like to look at the analogy of roads (we won’t look at water today, but the analogy still applies).  Roads are built using standards.  In the US, the Department of Transportation establishes the need and the construction standards for Interstate Highways and US highways.  The states establish standards and requirements for state roads, and county / local governments establish standards for everything else.

The roads are standard.  We know what to expect when driving on an Interstate Highway.  Whether it be bridge height, lane sizing, on / off ramps, or even rest stops – it is hard to be surprised when driving the Interstate Highway system.

However the highway system does not unnecessarily inhibit development of vehicles which use the highways – there are hundreds of different makes, models, and sizes of vehicles on the road, and all use the same basic infrastructure.

Getting back to cloud computing, to make our IaaS a true utility, we need to ensure interoperability and portability within the IaaS underlying technologies, and allow for true on-demand portability of the physical infrastructure, management systems, provisioning systems, and billing systems.  Just like with the electrical grid.  And standards much like the highway system, with the flexibility to support predictable, innovative ideas.

Once we have removed the burden of underlying physical IT infrastructure from our planning model, we can focus our energy on higher levels of utility, including PaaS and SaaS.

Enterprise Architecture frameworks, such as TOGAF, promote the use of Architecture Building Blocks (ABB) and Solution Building Blocks (SBB).  Where ABBs may define global, industry, and local standards, SBBs provide definition for solutions which are specific to a project, and do not normally have either standards or other reusable components to draw from.  However, development of SBBs should still acknowledge and have a design which will support either an existing  standard, or broader development of new standard interfaces in the future.

This includes the most important component: open, standard, and reusable interfaces (APIs) which support service-orientation, interoperability, and portability of data – which may also be considered characteristics of the future PaaS and SaaS utilities.  Or, in simpler terms, edging closer to the death of proprietary data or physical interfaces and functionality.
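One way to read the ABB/SBB distinction in code: the ABB is the open, standard, reusable interface, and an SBB is one project-specific implementation that still honors it. The interface and class names below are invented for illustration.

```python
# Illustration of the ABB/SBB idea: an Architecture Building Block as an open,
# standard interface, and a Solution Building Block as one project-specific
# implementation of it. All names here are invented for illustration.
from abc import ABC, abstractmethod


class DocumentStoreABB(ABC):
    """Standard, reusable interface (the 'ABB'): any conforming store is interchangeable."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryDocumentStore(DocumentStoreABB):
    """Project-specific implementation (the 'SBB') that still honors the standard interface."""

    def __init__(self) -> None:
        self._docs = {}

    def put(self, key: str, data: bytes) -> None:
        self._docs[key] = data

    def get(self, key: str) -> bytes:
        return self._docs[key]


store: DocumentStoreABB = InMemoryDocumentStore()
store.put("invoice-42", b"...")
print(store.get("invoice-42"))
```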

Now a reminder – at this level we are still striving to create utilities which will ultimately reduce or eliminate our need for specialized IT.  Yes, there are exceptions where specific equipment interfaces are unique to a technology, such as rock crushers in the mining industry.  However, for example, we are still able to conduct agile business on a global scale with all our customers, competitors, suppliers, and vendors all using compatible email.

That is the objective: to make the underlying infrastructure, including much of PaaS and SaaS, standard, and serve the needs of business innovation, without the danger of being inhibited by proprietary, non-standard, or incompatible interfaces.

Build a business on innovative ideas, create competitive or unique selling points and products, focus energy on developing those innovations, and relieve yourselves of the burden resulting from carrying excessive and unproductive IT infrastructure below the business.

And then IT is a utility.

Focusing on Cloud Portability and Interoperability

Cloud Computing has helped us understand both the opportunity, and the need, to decouple physical IT infrastructure from the requirements of business.  In theory cloud computing greatly enhances an organization’s ability to not only decommission inefficient data center resources, but even more importantly eases the process an organization needs to develop when moving to integration and service-orientation within supporting IT systems.

Current cloud computing standards, such as those published by the US National Institute of Standards and Technology (NIST), have provided very good definitions and a solid reference architecture for understanding, at a high level, the vision of cloud computing.

However, these definitions, while good for addressing the vision of cloud computing, are not at the level of detail needed to really understand the potential impact of cloud computing within an existing organization, nor the potential of enabling data and systems resources to meet a need for interoperability of data in a 2020 or 2025 IT world.

The key to interoperability, and subsequent portability, is a clear set of standards.  The Internet emerged as a collaboration of academic, government, and private industry development which bypassed much of the normal technology vendor desire to create a proprietary product or service.  The cloud computing world, while having deep roots in mainframe computing, time-sharing, grid computing, and other web hosting services, was really thrust upon the IT community with little fanfare in the mid-2000s.

While NIST, the Open GRID Forum, OASIS, DMTF, and other organizations have developed some levels of standardization for virtualization and portability, the reality is applications, platforms, and infrastructure are still largely tightly coupled, restricting the ease most developers would need to accelerate higher levels of integration and interconnections of data and applications.

NIST’s Cloud Computing Standards Roadmap (SP 500-291 v2) states:

…the migration to cloud computing should enable various multiple cloud platforms seamless access between and among various cloud services, to optimize the cloud consumer expectations and experience.

Cloud interoperability allows seamless exchange and use of data and services among various cloud infrastructure offerings and to the data and services exchanged to enable them to operate effectively together.”

Very easy to say, however the reality is, in particular with PaaS and SaaS libraries and services, that few fully interchangeable components exist, and any information sharing is a compromise in flexibility.

The Open Group, in their document “Cloud Computing Portability and Interoperability” simplifies the problem into a single statement:

“The cheaper and easier it is to integrate applications and systems, the closer you are getting to real interoperability.”

The alternative is of course an IT world that is restrained by proprietary interfaces, extending the pitfalls and dangers of vendor lock-in.

What Can We Do?

The first thing is, the cloud consumer world must make a stand and demand vendors produce services and applications based on interoperability and data portability standards.  No IT organization in the current IT maturity continuum should be procuring systems that do not support an open, industry-standard, service-oriented infrastructure, platform, and applications reference model (Open Group).

In addition to the need for interoperable data and services, the concept of portability is essential to developing, operating, and maintaining effective disaster management and continuity of operations procedures.  No IT infrastructure, platform, or application should be considered which does not allow and embrace portability.  This includes NIST’s guidance stating:

“Cloud portability allows two or more kinds of cloud infrastructures to seamlessly use data and services from one cloud system and be used for other cloud systems.”
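A minimal sketch of what that portability looks like in practice: export from one provider in a neutral, documented format and import into another, with no proprietary interchange step. The provider classes below are hypothetical stand-ins, not real cloud SDKs.

```python
# Minimal sketch of data portability: export from one provider in a neutral,
# standard format (JSON here) and import into another. The provider classes are
# hypothetical stand-ins, not real cloud SDKs.
import json


class ProviderA:
    def export_records(self) -> str:
        records = [{"id": 1, "customer": "ACME", "balance": 120.5}]
        return json.dumps(records)          # neutral, documented format


class ProviderB:
    def __init__(self) -> None:
        self.records = []

    def import_records(self, payload: str) -> None:
        self.records.extend(json.loads(payload))


# Move the data between clouds without any proprietary interchange format.
target = ProviderB()
target.import_records(ProviderA().export_records())
print(target.records)
```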

The bottom line for all CIOs, CTOs, and IT managers – accept the need for service-orientation within all existing or planned IT services and systems.  Embrace Service-Oriented Architectures and Enterprise Architecture, and avoid at all costs the potential for vendor lock-in when considering any level of infrastructure or service.

Standards are the key to portability and interoperability, and IT organizations have the power to continue forcing adoption and compliance with standards by all vendors.  Do not accept anything which does not fully support the need for data interoperability.

It is Time to Consider Wireless Mesh Networking in Our Disaster Recovery Plans

Wireless Mesh Networking (WMN) has been around for quite a few years.  However, not until recently, when protesters in Cairo and Hong Kong used utilities such as Firechat to bypass the mobile phone systems and communicate directly with each other, did mesh networking become well known.

WMN establishes an ad hoc communications network using the WiFi (802.11/15/16) radios on participants’ mobile phones and laptops to connect with each other, and extends the connectable portion of the network to any device with WMN software.  Some devices may act as clients, some as mesh routers, and some as gateways.  Of course there are more technical issues to fully understand with mesh networks, however the bottom line is: if you have an Android or iOS device, or a software-enabled laptop, you can join, extend, and participate in a WMN.
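To see why participation matters, the toy model below floods a message from node to node until every reachable device has relayed it once. This is a conceptual sketch only, not an implementation of 802.11s or any real WMN protocol.

```python
# Toy model of message flooding across an ad hoc mesh: each node relays a message
# to its neighbors until every reachable node has seen it. Conceptual sketch only,
# not an implementation of 802.11s or any real WMN protocol.
from collections import deque


def flood(links: dict, source: str) -> set:
    """Return the set of nodes a message from `source` reaches via neighbor relays."""
    reached = {source}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in links.get(node, []):
            if neighbor not in reached:          # each node relays only once
                reached.add(neighbor)
                queue.append(neighbor)
    return reached


# Phones within WiFi range of each other; the gateway has Internet access.
links = {
    "phone-1": ["phone-2"],
    "phone-2": ["phone-1", "phone-3", "laptop"],
    "phone-3": ["phone-2", "gateway"],
    "laptop": ["phone-2"],
    "gateway": ["phone-3"],
}
print(flood(links, "phone-1"))
```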

In locations highly vulnerable to natural disasters, such as hurricane, tornado, earthquake, or wildfire, access to communications can most certainly mean the difference between surviving and not surviving.  However, during disasters, communications networks are likely to fail.

The same concept used to allow protesters in Cairo and Hong Kong to communicate outside of the mobile and fixed telephone networks could, and possibly should, have a role to play in responding to disasters.

An interesting use of this type of network was highlighted in a recent novel by Matthew Mather, entitled “Cyberstorm.”  Following a “cyber” attack on the US Internet and connected infrastructures, much of the fixed communications infrastructure was rendered inoperable, and utilities depending on networks also fell under the impact.  An ad hoc WMN was built by some enterprising technicians, using the wireless radios available within most smart phones.  This supported primarily messaging, but it did allow citizens to communicate with each other – and with the police – by interconnecting their smart phones into the mesh.

We have already embraced mobile phones, with SMS instant messaging, into many of our country’s emergency notification systems.  In California we can receive instant notifications from emergency services via SMS and Twitter, in addition to reverse 911.  This actually works very well, up to the point of a disaster.

WMN may provide a model for ensuring communications following a disaster.  As nearly every American now has a mobile phone, with a WiFi radio, the basic requirements for a mesh network are already in our hands.  The main barrier, today, with WMN is the distance limitations between participating access devices.  With luck WiFi antennas will continue to increase in power, reducing distance barriers, as each new generation is developed.

There are quite a few WMN clients available for smart phones, tablets, and WiFi-enabled devices today.  While many of these are used as instant messaging and social platforms today, just as with other social communications applications such as Twitter, the underlying technology can be put to many different uses, including of course disaster communications.

Again, the main limitations on using WMNs in disaster planning today are the limited number of participating nodes (devices with a WiFi radio), distance limitations with existing wireless radios and protocols, and the fact very few people are even aware of the concept of WMNs and potential deployments or uses.  The more participants in a WMN, the more robust it becomes, the better performance the WMN will support, and the better chance your voice will be heard during a disaster.

Here are a couple WMN Disaster Support ideas I’d like to either develop, or see others develop:

  • Much like the existing 911 network, a WMN standard could and should be developed for all mobile phone devices, tablets, and laptops with a wireless radio
  • Each mobile device should include an “App” for disaster communications
  • Cities should attempt to install WMN compatible routers and access points, particularly in areas at high risk for natural disasters, which could be expected to survive the disaster
  • Citizens in disaster-prone areas should be encouraged to add a solar charging device to their earthquake, wildfire, and  other disaster-readiness kits to allow battery charging following an anticipated utility power loss
  • Survivable mesh-to-Internet gateways should be the responsibility of city government, while allowing citizen or volunteer gateways (including ham radio) to facilitate communications out of the disaster area
  • Emergency applications should include the ability to easily submit disaster status reports, including photos and video, to either local, state, or FEMA Incident Management Centers

That is a start.

Take a look at Wireless Mesh Networks.  Wikipedia has a great high-level explanation, and a Google search yields hundreds of entries.  WMNs are nothing new, but as with the early days of the Internet, they are not getting a lot of attention.  However, maybe at some time in the future a WMN could save your life.
