The Trouble with IT Disintermediation

I have a client who is concerned with some of their departments bypassing the organization's traditional IT process and going directly to cloud vendors for their IT resource needs.  This is not unique, as the cloud computing industry has disrupted traditional IT provisioning processes, not to mention caused a near complete loss of control over configuration management databases and inventories.

IT service disintermediation occurs when end users cut out the middleman in procuring ICT services and go directly to the service provider with an independent account.  Disintermediation normally occurs when one or more of the following conditions exist:

  1. The end user desires to remain independent, for reasons of control, use of decentralized budgets, or simply individual pride.
  2. The organizational service provider does not have a suitable resource available to meet the end user's needs.
  3. The end user does not have confidence in the organizational service provider.
  4. The organizational service provider has a suitable service, but is not able or willing to provision it to meet the end user's demands for timing, capacity, or other requirements.  This is often the result of a lengthy, bureaucratic process which is neither agile nor flexible, and does not promote a "sense of urgency" in completing provisioning tasks.
  5. The organizational service provider is not able, or is unwilling, to accommodate "special" orders which fall outside the service provider's portfolio.
  6. The organizational service provider does not respond to rapidly changing market, technology, and usage opportunities, creating barriers for business units trying to compete or respond to external conditions.

The result is damaging for any organization.  Consequences of this failure may include:

  • Loss of control over IT budgets.  Decentralized IT budgets which do not fall within a strategic plan or policy cannot be controlled.
  • Inability to develop and maintain organizational relationships with select or approved vendors.  Vendors relish the potential of disrupting single points of contact within large organizations, as it allows them to develop and sustain multiple high value contracts with individual agencies, rather than falling within volume purchasing agreements, audits, standards, security, SLAs, training, and so on.
  • Individual applications will normally result in incompatible information silos.  While interoperability within an organization is a high priority, particularly when looking at service-orientation and organizational decision support systems, systems disintermediation will result in failure, or extreme difficulty, in developing data sharing structures.
  • Poor continuity of operations and disaster management.  Non-standard systems are normally not fully documented, and documentation is often not made available to the organization's IT management or support operations.  Thus, when a disaster occurs, there is a high risk of complete data loss, or an inability to quickly restore full services to the organization, customers, and general user base.
  • Difficulty in data and systems portability.  If or when a service provider fails to meet the expectations of the end user, goes out of business, or for some reason decides not to continue supporting the user, the existing data and systems should be portable to another service provider (portability is also addressed within the NIST standard).

While there are certainly other considerations, this covers the main pain points disintermediation might present.

The next obvious question is how best to mitigate the condition.  This is more difficult than in the past, as it is now very easy to establish an account and resources with a cloud company using nothing more than a credit card, or at the urging of an aggressive salesperson.

Responsibility falls into two areas: 1) ensuring the organizational service provider is able to meet the needs of end users (or is able to find solutions in a timely manner to assist the end user), and 2) developing policies and processes which not only facilitate end user acquisition of resources, but also establish accountability when those policies are not followed.

In addition, the organizational service provider must follow standard architectural and governance processes, which include continual review and improvement cycles.  As technology and organizational priorities change, so must policies change to recognize and accommodate reasonable change.  End users must be fully aware of the products and services the IT department has to offer, and of course the IT department must have an aggressive sense of urgency in responding to and fulfilling those requirements.

SDNs in the Carrier Hotel

Carrier hotels are an integral part of global communications infrastructure.  The carrier hotel serves a vital function, specifically the role of a common point of interconnection between facility-based carriers (physical cable in terrestrial, submarine, or satellite networks), networks, content delivery networks (CDNs), Internet Service Providers (ISPs), and even private or government networks and hosting companies.

In some locations, such as the One Wilshire Building in Los Angeles, or 60 Hudson in New York, several hundred carriers and service providers may interconnect physically within a main distribution frame (MDF), or virtually through interconnections at Internet Exchange Points (IXPs) or Ethernet Exchange points.

Carrier hotel operators understand that technology is starting to overcome many of the traditional forms of interconnection.  With 100Gbps wavelengths and port speeds, network providers are able to push many individual virtual connections through a single interface, reducing the need for individual cross connections or interconnections to establish customer or inter-network circuits.

While interconnections such as Internet peering and VLANs have been available for many years through IXPs and circuit multiplexing, software defined networks (SDNs) are poised to provide a new model of interconnection at the carrier hotel, forcing not only an upgrade of supporting technologies, but also a reconsideration of the entire model and concept of how the carrier hotel operates.

Several telecom companies have announced their own internal deployments of order fulfillment platforms based on SDN, including PacNet’s PEN and Level 3’s (originally Time Warner) pilot test at DukeNet, proving that circuit design and provisioning can be easily accomplished through SDN-enabled orchestration engines.

However, inter-carrier circuit or service orchestration is not yet in common use at the main carrier hotels and interconnection points.

Taking a closer look at the carrier hotel environment, we see an opportunity based on a simple vision: if the carrier hotel operator provides an orchestration platform which allows individual carriers, networks, cloud service providers, CDNs, and other participants to connect at a common point, with standard APIs to allow communication between different participant network or service resources, then interconnection fulfillment may be completed in a matter of minutes, rather than the days or weeks required in the current environment.

This capability goes even a step deeper.  Let's say carrier "A" has an enterprise customer connected to its network.  The customer has an on-demand provisioning arrangement with carrier "A," allowing the customer to establish communications not only within carrier "A's" own network resources, but also to flow through the carrier hotel's interconnection broker into, say, a cloud service provider's network.  The customer should be able to design and provision its own solutions, based on the availability of internal and interconnection resources accessible through the carrier.

Participants will announce their available resources to the carrier hotel's orchestration engine (network access broker), and those resources can then be provisioned on-demand by any other participant (assuming the participants have a service or financial accounting agreement, either based on the carrier hotel's standard terms or on individual agreements established between participants).
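To make the announce-and-provision interaction concrete, here is a minimal, hypothetical Python sketch of such a network access broker.  The class names, fields, and workflow are illustrative assumptions only, not any carrier hotel's or vendor's actual API.

```python
# Hypothetical sketch of a carrier hotel "network access broker": participants
# announce available resources, and other participants provision them on demand,
# provided a service or billing agreement exists.  All names are illustrative.
from dataclasses import dataclass, field


@dataclass
class Resource:
    owner: str          # participant announcing the resource (carrier, CSP, CDN, etc.)
    service: str        # e.g., "ethernet-10G" or "cloud-direct-connect"
    capacity_mbps: int  # capacity offered to the community


@dataclass
class InterconnectionBroker:
    resources: list = field(default_factory=list)
    agreements: set = field(default_factory=set)  # participant pairs with an agreement

    def announce(self, resource: Resource) -> None:
        """A participant publishes an available resource to the broker."""
        self.resources.append(resource)

    def register_agreement(self, party_a: str, party_b: str) -> None:
        """Record a standard or bilateral service/financial accounting agreement."""
        self.agreements.add(frozenset((party_a, party_b)))

    def provision(self, requester: str, service: str, mbps: int) -> Resource:
        """Fulfill an interconnection request in minutes rather than days or weeks."""
        for res in self.resources:
            if (res.service == service
                    and res.capacity_mbps >= mbps
                    and frozenset((requester, res.owner)) in self.agreements):
                res.capacity_mbps -= mbps   # reserve capacity from the announced pool
                return res
        raise LookupError("No eligible resource or agreement found")


# Example: carrier "A" provisions a path toward a cloud service provider.
broker = InterconnectionBroker()
broker.announce(Resource(owner="CloudCo", service="cloud-direct-connect", capacity_mbps=100_000))
broker.register_agreement("Carrier-A", "CloudCo")
circuit = broker.provision(requester="Carrier-A", service="cloud-direct-connect", mbps=1_000)
print(f"Provisioned 1 Gbps toward {circuit.owner}")
```

In a real deployment the broker would also push configuration down to the participants' SDN controllers; the sketch only captures the announce, agree, and provision steps described above.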

If we use NIST's characteristics of cloud computing as a potential model, then the carrier hotel's interconnection orchestration engine should ultimately provide participants:

  • On-demand self-service provisioning
  • Elasticity, meaning short term usage agreements, possibly even down to the minute or hour
  • Resource pooling, or a model similar to a spot market (in competing markets where multiple carriers or service providers may be able to provide the same service)
  • Measured service (usage-based or usage-sensitive billing for service use; see the worked example after this list)
  • And of course broad network access, currently using 100Gbps ports or multiples of 100Gbps (until 1Tbps ports become available)
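As a rough illustration of the elasticity and measured service characteristics above, the arithmetic for a short-term, usage-metered interconnection might look like the following.  The hourly rate is an invented figure used purely to show the calculation, not actual market pricing.

```python
# Illustrative only: metered billing for a short-term, elastic interconnection.
RATE_PER_GBPS_HOUR = 2.50   # assumed price in USD per Gbps per hour (not a real quote)

def metered_charge(gbps: float, hours: float) -> float:
    """Usage-sensitive charge for capacity held only as long as it is needed."""
    return gbps * hours * RATE_PER_GBPS_HOUR

# A 10 Gbps circuit held for 6 hours, then released back to the resource pool.
print(f"${metered_charge(10, 6):,.2f}")   # $150.00
```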

While layer 1 (physical) interconnection of network resources will always be required (the bits need to flow on fiber or wireless at some point), the future of carrier and service resource interconnection must evolve to accept and acknowledge the need for user-driven, near real-time provisioning of network and other service resources, on a global scale.

The carrier hotel will continue to play an integral role in bringing this capability to the community, and the future is likely to be based on software-driven, on-demand meet-me rooms.

PTC 2015 Wraps Up with Strong Messages on SDNs and Automation

Software Defined Networking (SDN) and Network Function Virtualization (NFV) themes dominated workshops and side conversations throughout the PTC 2015 venue in Honolulu, Hawai'i this week.

SDNs, or more specifically provisioning automation platforms for service provider interconnections, have crept into nearly all marketing materials and elevator pitches in discussions with submarine cable operators, networks, Internet Exchange Points, and carrier hotels.

While some of the material may have included a bit of "SDN washing," for the most part each operator and service provider engaging in the discussion understands and is scrambling to address the need, and is very serious in acknowledging a pending industry "paradigm shift" in service delivery models.

Presentations by companies such as Ciena and Riverbed showed a mature service delivery structure based on SDNs, while PacNet and Level 3 Communications (formerly TW Telecom) presented functional on-demand, self-service models of both service provisioning and a value-added marketplace.

Steve Alexander from Ciena explained some of the challenges the industry must address, such as the development of cross-industry, SDN-enabled service delivery and provisioning standards.  In addition, as service providers move into service delivery automation, they must still be able to provide a discriminating or unique selling point by considering:

  • How to differentiate their service offering
  • How to differentiate their operations environment
  • How to ensure industry-acceptable delivery and provisioning time cycles
  • How to deal with legacy deployments

Alexander also emphasized that as an industry we need to get away from physical wiring when possible.  With 100Gbps ports, and the ability to create a software abstraction of individual circuits within the 100Gbps resource pool (as an example), there is a lot of virtual or logical provisioning that can be accomplished without the need for dozens or hundreds of physical cross connections.
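As a simple sketch of that abstraction (the names below are mine, not Ciena's), logical circuits can be carved out of a single 100Gbps port in software, so adding or removing a customer circuit becomes a provisioning call rather than a new cross connection:

```python
# Minimal sketch: carving logical circuits out of one 100Gbps physical port, so
# new customer circuits require software provisioning rather than new cabling.
class PhysicalPort:
    def __init__(self, capacity_gbps: int = 100):
        self.capacity_gbps = capacity_gbps
        self.circuits = {}  # circuit_id -> allocated Gbps

    def free_gbps(self) -> int:
        return self.capacity_gbps - sum(self.circuits.values())

    def provision_circuit(self, circuit_id: str, gbps: int) -> None:
        """Allocate a virtual circuit if headroom remains on the port."""
        if gbps > self.free_gbps():
            raise ValueError("Insufficient capacity on this port")
        self.circuits[circuit_id] = gbps

    def release_circuit(self, circuit_id: str) -> None:
        """Return a circuit's capacity to the pool (elastic teardown)."""
        self.circuits.pop(circuit_id, None)


port = PhysicalPort()
port.provision_circuit("customer-A", 10)
port.provision_circuit("customer-B", 40)
print(port.free_gbps())  # 50 Gbps still available without any new physical wiring
```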

The result of this effort should be an environment both within a single service provider and in a broader community marketplace such as a carrier hotel or large telecom interconnection facility (e.g., The Westin Building, 60 Hudson, One Wilshire).  Some examples of actual and required deployments included:

  • A bandwidth on-demand marketplace
  • Data center interconnections, including within data center operators which have multiple interconnected meet-me-points spread across a geographic area
  • Interconnection to other services within the marketplace such as cloud service providers (e.g., Amazon Direct Connect, Azure, Softlayer, etc), content delivery networks, SaaS, and disaster recovery capacity and services

Robust discussions on standards also spawned debate.  With SDNs, much like any other emerging technology or business model, there are both competing and complementary standards.  Even terms such as Network Function Virtualization (NFV), while useful, do not have much depth within standard taxonomies or definitions.

During the PTC 2015 session entitled "Advanced Capabilities in the Control Plane Leveraging SDN and NFV Toward Intelligent Networks," a long list of current standards and products supporting the concept of SDNs was presented, including:

  • OpenContrail
  • OpenDaylight
  • OpenStack
  • OpenFlow
  • OPNFV
  • ONOS
  • OvS
  • Project Floodlight
  • Open Networking
  • and on and on….

For consumers and small network operators this is a very good development, and will certainly usher in a new era of on-demand self-service capacity provisioning, elastic provisioning (short-term service contracts even down to the minute or hour), carrier hotel-based bandwidth and service marketplaces, and variable usage metering and costs, allowing much better use of OPEX budgets.

For service providers (according to discussions with several North Asian telecom carriers), it is not quite as attractive, as they generally would like to see long term, set (or fixed) contracts or wholesale capacity sales.

The connection and integration of cloud services with telecom or network services is quite clear.  At some point provisioning of both telecom and compute/storage/application services will be through a single interface, on-demand, elastic (use only what you need and for only as long as you need it), usage-based (metered), and favor the end user.

While most operators get the message and are in the process of developing and deploying their first iteration solutions, others simply still have a bit of homework to do.  In the words of one CEO from a very large international data center company, "we really need to have a strategy to deal with this multi-cloud, hybrid cloud, or whatever you call it thing."

Oh my…

PTC 2015 Focuses on Submarine Cables and SDNs

In an informal survey of words used during seminars and discussions, two main themes are emerging at the Pacific Telecommunications Council's 2015 annual conference.  The first, as expected, is the development of more submarine cable capacity both within the Pacific and to end points in ANZ, Asia, and North America.  The second is software defined networking (SDN), which as envisioned could quickly begin to re-engineer the gateway and carrier hotel interconnection business.

New cable developments, including Arctic Fiber, Trident, SEA-US, and APX-E, have sparked a lot of interest.  One discussion at Sunday morning's Submarine Cable Workshop highlighted the need for Asian (and other) regions to find ways to bypass the United States, not just for performance reasons, but also to prevent US government agencies from intercepting and potentially exploiting data hitting US networks and data systems.

The bottom line in all submarine cable discussions is the need for more, and more, and more cable capacity.  Applications using international communications capacity, notably video, are consuming bandwidth at rates which are driving fear that cable operators won't be able to keep up with capacity demands.

However, perhaps the most interesting, and frankly surprising, development is with SDNs in the meet-me room (MMR).  Products such as PacNet's PEN (PacNet Enabled Network) are finally putting reality into on-demand, self-service circuit provisioning, and soon cloud computing capacity provisioning, within the MMR.  Demonstrations showed how a network, or user, can provision a point-to-point circuit from 1Mbps to 10Gbps within a minute.

In the past, on-demand provisioning of interconnections was limited to Internet Exchange Points, fiber cross connects, VLANs, and point-to-point Ethernet connections.  Now, as carrier hotels and MMRs acknowledge the need for rapid provisioning of elastic resources (rapid addition and deletion of bandwidth or capacity), physical cross connects and IXP peering tools alone will not be adequate for future market demands.

SDN models, such as PacNet’s PEN, are a very innovative step towards this vision.  The underlying physical interconnection infrastructure simply becomes a software abstraction for end users (including carriers and networks) allowing circuit provisioning in a matter of minutes, rather than days.

The main requirement for full deployment is to "sell" carriers and networks on the concept, as key success factors will revolve around the network effect of participant communities.  Simply put, the more connecting and participating networks within the SDN "community," the more value the SDN-enabled MMR brings to a facility or market.

A great start to PTC 2015.  More PTC 2015 “sidebars” on Tuesday.

You Want Money for a Data Center Buildout?

A couple of years ago I attended several "fast pitch" competitions and events for entrepreneurs in Southern California, all designed to give startups a chance to "pitch" their ideas in about 60 seconds to a panel of representatives from the local investment community.  Similar to television's "Shark Tank," most of the idea pitches were harshly critiqued, with the real intent of assisting participating entrepreneurs in developing a better story for approaching investors and markets.

While very few of the pitches received a strong, positive response, I recall one young guy who really set the panel back a step in awe.  The product was related to biotech, and the panel provided a very strong, positive response to the pitch.

Wishing to dig a bit deeper, one of the panel members asked the guy how much money he was looking for in an investment, and how he’d use the money.

"$5 million," he responded, with a resounding wave of nods from the panel.  "I'd use around $3 million for staffing, getting the office started, and product development."  Another round of positive expressions.  "And then we'd spend around $2 million setting up in a data center with servers, telecoms, and storage systems."

This time the panel looked as if they'd just taken a crisp slap to the face.  After a moment of collection, the panel spokesman launched into a dressing down of the entrepreneur, stating "I really like the product, and think your vision is solid.  However, with a greater than 95% chance of your company going bust within the first year, I have no desire to be stuck with $2 million worth of obsolete computer hardware, and potential contract liabilities, once you shut down your data center.  You've got to use your head and look at going to Amazon for your data center capacity and forget this data center idea."

Now it was the entire audience’s turn to take a pause.

In the past, IT managers placed buying and controlling their own hardware, in their own facility, as a high priority, with no room for compromise.  Whether for perceptions of security, a desire for personal control, or simply a concern that outsourcing would limit their own career potential, server closets and small data centers were a common characteristic of most small offices.

At some point, the need for proximity to Internet or communication exchange points, or simple limitations on local facility capacity, started forcing a migration of enterprise data centers into commercial colocation.  For the most part, IT managers still owned and controlled any hardware outsourced into the colocation facility, and most agreed that in general colocation facilities offered higher uptime, fewer service disruptions, and good performance, in particular for eCommerce sites.

Now we are at a new IT architecture crossroads.  Is there really any good reason for a startup, medium, or even large enterprise to continue operating its own data center, or even its own hardware within a colocation facility?  Certainly if the average CFO or business unit manager had their choice, the local data center would be decommissioned and shut down as quickly as possible.  The CAPEX investment, carrying hardware on the books for years of depreciation, the lack of business agility, and the costs and risks of business continuity and disaster recovery all force the question: "why don't we just rent IT capacity from a cloud service provider?"
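A back-of-the-envelope comparison shows why the question keeps coming up.  Every figure below is an assumption chosen for illustration, not a quote from any provider or a claim about real costs.

```python
# Rough CAPEX-vs-cloud comparison over a three-year window; all figures are
# assumptions for illustration only.
HARDWARE_CAPEX = 2_000_000        # upfront purchase, depreciated on the books
DEPRECIATION_YEARS = 5
ANNUAL_FACILITY_AND_STAFF = 400_000

CLOUD_MONTHLY_SPEND = 60_000      # assumed usage-based (OPEX) run rate

years = 3                         # evaluation window

owned_cost = (HARDWARE_CAPEX * min(years, DEPRECIATION_YEARS) / DEPRECIATION_YEARS
              + ANNUAL_FACILITY_AND_STAFF * years)
cloud_cost = CLOUD_MONTHLY_SPEND * 12 * years

print(f"Owned data center, 3 years:    ${owned_cost:,.0f}")   # $2,400,000
print(f"Rented cloud capacity, 3 years: ${cloud_cost:,.0f}")  # $2,160,000
# The owned case also leaves undepreciated hardware and contract liabilities on
# the books if the business shuts down early, which is the panel's point above.
```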

Many still question the security of public clouds, many still question the compliance issues related to outsourcing, and many still simply do not want to give up their “soon-to-be-redundant” data center jobs.

Of course it is clear most large cloud computing companies have much better resources available to manage security than a small company, and have made great advances in compliance certifications (mostly due to the US government acknowledging the role of cloud computing and changing regulations to accommodate it).  If we look at the US government's FedRAMP certification program as an example, security, compliance, and management controls are now a published standard, open for all organizations to study and adopt as appropriate.

So we get back to the original question: what would justify a company continuing to build data centers, when a virtual data center (as the first small step in adopting a cloud computing architecture) will provide better flexibility, agility, security, and performance at lower cost than operating local or colocated physical IT infrastructure?  Sure, exceptions exist, including specialized hardware interfaces to support mining, health care, or other very specialized activities.  However, if you are not in the computer or switch manufacturing business, can you really continue justifying CAPEX expenditures on IT?

IT is quickly becoming a utility.  As businesses we do not plan to build our own roads, water distribution, or power generation plants.  Compute, telecom, and storage resources are becoming a utility as well, and IT managers (and data center / colocation companies) need to do a comprehensive review of their business and strategy, and find a way to exploit this technology reality, rather than allow it to pass them by.

Focusing on Cloud Portability and Interoperability

Cloud computing has helped us understand both the opportunity, and the need, to decouple physical IT infrastructure from the requirements of business.  In theory, cloud computing greatly enhances an organization's ability not only to decommission inefficient data center resources, but, even more importantly, to ease the process of moving to integration and service-orientation within supporting IT systems.

Current cloud computing standards, such as those published by the US National Institute of Standards and Technology (NIST), have provided very good definitions and a solid reference architecture for understanding, at a high level, the vision of cloud computing.

However, these definitions, while good for addressing the vision of cloud computing, are not at the level of detail needed to really understand the potential impact of cloud computing within an existing organization, nor the potential of enabling data and systems resources to meet the need for interoperability of data in a 2020 or 2025 IT world.

The key to interoperability, and subsequent portability, is a clear set of standards.  The Internet emerged as a collaboration of academic, government, and private industry development which bypassed much of the normal technology vendor desire to create a proprietary product or service.  The cloud computing world, while having deep roots in mainframe computing, time-sharing, grid computing, and other web hosting services, was really thrust upon the IT community with little fanfare in the mid-2000s.

While NIST, the Open Grid Forum, OASIS, DMTF, and other organizations have developed some levels of standardization for virtualization and portability, the reality is that applications, platforms, and infrastructure are still largely tightly coupled, restricting the ease with which most developers could accelerate higher levels of integration and interconnection of data and applications.

NIST’s Cloud Computing Standards Roadmap (SP 500-291 v2) states:

…the migration to cloud computing should enable various multiple cloud platforms seamless access between and among various cloud services, to optimize the cloud consumer expectations and experience.

Cloud interoperability allows seamless exchange and use of data and services among various cloud infrastructure offerings and to the data and services exchanged to enable them to operate effectively together.”

Very easy to say; however, the reality is, particularly with PaaS and SaaS libraries and services, that few fully interchangeable components exist, and any information sharing requires a compromise in flexibility.

The Open Group, in their document "Cloud Computing Portability and Interoperability," simplifies the problem into a single statement:

“The cheaper and easier it is to integrate applications and systems, the closer you are getting to real interoperability.”

The alternative is of course an IT world that is restrained by proprietary interfaces, extending the pitfalls and dangers of vendor lock-in.

What Can We Do?

The first thing is that the cloud consumer world must take a stand and demand that vendors produce services and applications based on interoperability and data portability standards.  No IT organization in the current IT maturity continuum should be procuring systems that do not support an open, industry-standard, service-oriented infrastructure, platform, and applications reference model (Open Group).

In addition to the need for interoperable data and services, the concept of portability is essential to developing, operating, and maintaining effective disaster management and continuity of operations procedures.  No IT infrastructure, platform, or application should be considered which does not allow and embrace portability.  This includes NIST’s guidance stating:

“Cloud portability allows two or more kinds of cloud infrastructures to seamlessly use data and services from one cloud system and be used for other cloud systems.”

The bottom line for all CIOs, CTOs, and IT managers: accept the need for service-orientation within all existing or planned IT services and systems.  Embrace Service-Oriented Architectures and Enterprise Architecture, and avoid at all costs the potential for vendor lock-in when considering any level of infrastructure or service.

Standards are the key to portability and interoperability, and IT organizations have the power to continue forcing adoption and compliance with standards by all vendors.  Do not accept anything which does not fully support the need for data interoperability.

Developing a New “Service-Centric IT Value Chain”

As IT professionals we have been overwhelmed with different standards for each component of architecture, service delivery, governance, security, and operations.  Not only does IT need to ensure technical training and certification, but professionals are also expected to pursue certifications in ITIL, TOGAF, COBIT, PMP, and a variety of other frameworks, at a high cost in both time and money.

Wouldn’t it be nice to have an IT framework or reference architecture which brings all the important components of each standard or recommendation into a single model which focuses on the most important aspect of each existing model?

The Open Group is well-known for publishing TOGAF (The Open Group Architecture Framework), in addition to a variety of other standards and frameworks related to Service-Oriented Architectures (SOA), security, risk, and cloud computing.  In the past few years, recognizing the impact of broadband, cloud computing, SOAs, and the need for a holistic enterprise architecture approach to business and IT, the group has published many common-sense but powerful recommendations, such as:

  • TOGAF 9.1
  • Open FAIR (Risk Analysis and Assessment)
  • SOCCI (Service-Oriented Cloud Computing Infrastructure)
  • Cloud Computing
  • Open Enterprise Security Architecture
  • Document Interchange Reference Model (for interoperability)
  • and others.

The Open Group's latest project intended to streamline and focus IT systems development is called the "IT4IT" Reference Architecture.  While still in the development, or "snapshot," phase, IT4IT is surprisingly easy to read, easy to understand, and most importantly logical.

“The IT Value Chain and IT4IT Reference Architecture represent the IT service lifecycle in a new and powerful way. They provide the missing link between industry standard best practice guides and the technology framework and tools that power the service management ecosystem. The IT Value Chain and IT4IT Reference Architecture are a new foundation on which to base your IT operating model. Together, they deliver a welcome blueprint for the CIO to accelerate IT’s transition to becoming a service broker to the business.” (Open Group’s IT4IT Reference Architecture, v 1.3)

The IT4IT Reference Architecture acknowledges changes in both technology and business resulting from the incredible impact the Internet and automation have had on both enterprise and government use of information and data.  However, the document also makes a compelling case that IT systems, theory, and operations have not kept up with either existing IT support technologies or the business visions and objectives IT is meant to serve.

IT4IT's development team is a large, global collaborative effort including vendors, enterprises, telecommunications companies, academia, and consulting firms.  This helps drive a vendor- and technology-neutral framework, focusing more on running IT as a business than on conforming to a single vendor's product or service.  Eventually, like all developing standards, IT4IT may push vendors and systems developers to provide a solid model and framework for developing business solutions, which will support greater interoperability and data sharing between both internal and external organizations.

The vision and objectives for IT4IT include two major components: the IT Value Chain and the IT4IT Reference Architecture.  Within the IT4IT Core are sections providing guidance, including:

  • IT4IT Abstractions and Class Structures
  • The Strategy to Portfolio Value Stream
  • The Requirement to Deploy Value Stream
  • The Request to Fulfill Value Stream
  • The Detect to Correct Value Stream

Each of the above main sections has borrowed from, or further developed, ideas and activities from within ITIL, COBIT, and TOGAF, but takes a giant leap by incorporating cloud computing, SOAs, and enterprise architecture into the product.

As the IT4IT Reference Architecture is completed, and supporting roadmaps developed, the IT4IT concept will no doubt find a large legion of supporters, as many, if not most, businesses and IT professionals find the certification and knowledge path for ITIL, COBIT, TOGAF, and other supporting frameworks either too expensive, or too time consuming (both in training and implementation).

Take a look at IT4IT at the Open Group’s website, and let us know what you think.  Too light?  Not needed?  A great idea or concept?  Let us know.

NexGen Cloud Conference in San Diego – Missing the Point

The NexGen Cloud Computing Conference kicked off on Thursday in San Diego with a fair amount of hype and a lot of sales people.  Granted, the intent of the conference is for cloud computing vendors to find and develop either sales channels or business development opportunities within the market.

For an engineer, the conference will probably result in a fair amount of frustration, but it will at least provide a level of awareness of how an organization's sales, marketing, and business teams are approaching their vision of cloud computing product or service delivery.

However, one presentation stood out.  Terry Hedden, from Marketopia, made some very good points.  His presentation was entitled “How to Build a Successful Cloud Practice.”  While the actual presentation is not so important, he made several points, which I’ll refer to as “Heddonisms,” which struck me as important enough, or amusing enough, to record.

Some of the following “Heddonisms” were paraphrased either due to my misunderstanding of his point, or because I thought the point was so profound it needed a bit of additional highlight.

Heddonisms for the Cloud Age:

  • Entire software companies are transitioning to SaaS development.  Lose the idea of licensed software – think of subscription software.
  • Integrators and consultants have a really good future – prepare yourself.
  • The younger generation does not attend tech conferences.  Only old people attend: those who think they can sell things, get new jobs, or are trying to attach some knowledge to the junk they are selling (the last couple of points are mine).
  • Companies selling hosted SaaS products and services are going to kill those who still cling to on-premise software.
  • If you do not introduce cloud services to your customers, your competitor will introduce cloud to your customers.
  • If you are not aspiring to be a leader in cloud, you are not relevant.
  • There is little reason to go into the IaaS business yourself.  Let the big guys build infrastructure – you can make higher margins selling their stuff.  In general, IaaS companies are really bad sales organizations (also mine…).
  • Budgets for security at companies like Microsoft are much higher than for smaller companies.  Thus, it is likely Microsoft’s ability to design, deploy, monitor, and manage secure infrastructure is much higher than the average organization.
  • Selling cloud is easy – you are able to relieve your customers of most up front costs (like buying hardware, constructing data centers, etc.).
  • If you simply direct your customer to Microsoft or Google's website for a solution, then you are adding no value to your customer.
  • If you hear the word “APP” come up in a conversation, just turn around and run away.
  • If you assist a company in a large SaaS implementation (successfully), they will likely be your customer for life.
  • Don’t do free work or consulting – never (this really hurt me to hear – guilty as charged…).
  • Customers have one concern, and one concern only – Peace of Mind.  Make their pains go away, and you will be successful.  Don’t give them more problems.
  • Customers don’t care what is behind the curtain (such as what kind of computers or routers you are using).  They only care about you taking the pain of stuff that doesn’t make them money away from their lives.
  • Don’t try to sell to IT guys and engineers.  Never.  Never. Never.
  • The best time to work with a company is when they are planning for their technology refresh cycles.

Hedden was great.  While he may have a bit of contempt for engineers (I have thick skin, I can live with the wounds), he provided a very logical and realistic view of how to approach selling and deploying cloud computing.

Now about missing the point.  Perhaps the biggest shortfall of the conference, in my opinion, is that most presentations, and even vendor efforts, addressed only isolated silos of issues.  Nobody provided an integrated viewpoint of how cloud computing is actually just one tool an organization can use within a larger, planned architecture.

No doubt I have become bigoted myself after several years of plodding through TOGAF, ITIL, COBIT, Risk Assessments, and many other formal IT-supporting frameworks.  Maybe a career in the military forced me into systems thinking and structured problem solving.  Maybe I lack a higher level of innovative thinking or creativity – but I crave a structured, holistic approach to IT.

Sadly, I got no joy at the NexGen Cloud Computing Conference.  But I would have driven from LA to San Diego just for Hedden's presentation and training session; that alone made the cost of the conference and the time a valuable investment.

Nurturing the Marriage of Cloud Computing and SOAs

In 2009 we began consulting jobs with governments in developing countries, with the primary objective of consolidating data centers across government ministries and agencies into centralized, high-capacity, high-quality data centers.  At the time, nearly all individual ministry or agency data infrastructure was built into small computer rooms or server closets with some added "brute force" air conditioning, no backup generators, no data backups, superficial security, and lots of other ailments.

The vision and strategy was that if we consolidated inefficient, end-of-life, and high-risk IT infrastructure into a standardized and professionally managed facility, national information infrastructure would not only be more secure, but, through standardization, volume purchasing agreements, some server virtualization, and development of broadband infrastructure, most of the IT needs of government would be easily fulfilled.

Then of course cloud computing began to mature, and the underlying technologies of Infrastructure as a Service (IaaS) became feasible.  Now, not only were the governments able to decommission inefficient and high-risk IS environments, they would also be able to build virtual data centers with on-demand compute, storage, and network resources.  Basic data center replacement.

Even the remaining committed "server hugger" IT managers and fiercely independent governmental organizations could hardly argue against the benefits of having access to disaster recovery storage capacity through the centralized data center.

As the years passed and we entered 2014, not only did cloud computing mature as a business model, but senior management began to increase their awareness of various aspects of cloud computing, including the financial benefits, standardization of IT resources, the essential characteristics of cloud computing, and the potential for Platform and Software as a Service (PaaS/SaaS) to improve both business agility and internal decision support systems.

At the same time, information and organizational architecture, governance, and service delivery frameworks such as TOGAF, COBIT, ITIL, and Risk Analysis training reinforced the value of both data and information within an organization, and the need for IT systems to support higher-level architectures serving decision support systems and market interactions (including Government to Government, Business, and Citizens for the public sector).

2015 will bring cloud computing and architecture together at levels just becoming comprehensible to much of the business and IT world.  The Open Group has made a good first stab at building a standard for this marriage with their Service-Oriented Cloud Computing Infrastructure (SOCCI).  According to the SOCCI standard,

“Infrastructure is a foundational element for enterprise architecture. Infrastructure has been traditionally provisioned in a physical manner. With the evolution of virtualization technologies and application of service-orientation to infrastructure, it can now be offered as a service.

Service-orientation principles originated in the business and application architecture arena. After repeated, successful application of these principles to application architecture, IT has evolved to extending these principles to the infrastructure.”

At first glance the SOCCI standard appears to be a document which creates a mapping between enterprise architecture (TOGAF) and cloud computing.  At second glance, the SOCCI standard really steps towards tightening the loose coupling of standard service-oriented architectures through the use of cloud computing tools included with all service models (IaaS/PaaS/SaaS).

The result is an architectural vision which is easily capable of absorbing existing IT requirements, as well as incorporating emerging big data analytics models, interoperability, and enterprise architecture.

Since the early days of 2009, discussion topics with government and enterprise customers have shown a marked transition from simply justifying the decommissioning of high-risk data centers to how to manage data sharing, interoperability, the potential for over-standardization, and other service delivery barriers which might inhibit innovation, or the ability of business units to quickly respond to rapidly changing market opportunities.

2015 will be an exciting year for information and communications technologies.  For those of us in the consulting and training business, the new year is already shaping up to be the busiest we have seen.

Now that We Have Adopted IaaS…

Providing guidance or consulting to organizations on cloud computing topics can be really easy, or really tough.  In the past most of the initial engagement was dedicated to training and building awareness with your customer.  The next step was finding a high value, low risk application or service that could be moved to Infrastructure as a Service (IaaS) to solve an immediate problem, normally associated with disaster recovery or data backups.

As the years have continued, the dynamics have changed.  On one hand, IT professionals and CIOs began to establish better knowledge of what virtualization, cloud computing, and outsourcing could do for their organization.  CFOs became aware of the financial potential of virtualization and cloud computing, and a healthy dialog developed between IT, operations, business units, and the CFO.

The “Internet Age” has also driven global competition down to the local level, forcing nearly all organizations to respond more rapidly to business opportunities.  If a business unit cannot rapidly respond to the opportunity, which may require product and service development, the opportunity can be lost far more quickly than in the past.

In the old days, procurement of IT resources could require a fairly lengthy cycle.  In the Internet Age, if an IT procurement cycle takes more than six months, there is probably little chance of effectively meeting the greatly shortened development cycles competitors on other continents, or across the street, may be able to fulfill.

With IaaS, the procurement cycle for IT resources can be measured in minutes, allowing business units to spend far more time developing products, services, and solutions, rather than dealing with the frustration of being powerless to respond to short-window opportunities.  This of course addresses the essential cloud characteristics of Rapid Elasticity and On-Demand Self-Service.
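As a minimal sketch of what that looks like in practice, the following uses AWS's boto3 SDK to request and then release a single compute instance.  It assumes AWS credentials are already configured; the AMI ID, instance type, and region are placeholders, not recommendations.

```python
# Minimal sketch of on-demand self-service provisioning through an IaaS API
# (AWS EC2 via boto3).  Assumes credentials are configured; the AMI ID,
# instance type, and region below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

response = ec2.run_instances(
    ImageId="ami-12345678",      # placeholder machine image
    InstanceType="t3.medium",    # placeholder size; scale up or down as needed
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Requested instance {instance_id}: capacity in minutes, not months")

# Elasticity works both ways: release the resource when the opportunity closes.
ec2.terminate_instances(InstanceIds=[instance_id])
```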

In addition to on-demand and elastic resources, IaaS has offered nearly all organizations the option of moving IT resources into either public or private cloud infrastructure.  This has the benefit of allowing data center decommissioning, and re-commissioning into a virtual environment.  The cost of operating data centers, maintaining data centers and IT equipment, and staffing data centers vs. outsourcing that infrastructure into a cloud is very interesting to CFOs, and a major justification for replacing physical data centers with virtual data centers.

The second dynamic, in addition to greater professional knowledge and awareness of cloud computing, is the fact that we are starting to recruit cloud-aware employees graduating from universities and taking their first steps into careers and the workforce.  With these "cloud savvy" young people comes deep experience with interoperable data, social media, big data, data analytics, and an intellectual separation between access devices and underlying IT infrastructure.

The Next Step in Cloud Evolution

OK, so we all are generally aware of the components of IaaS, Platform as a Service (PaaS), and Software as a Service (SaaS).  Let’s have a quick review of some standout features supported or enabled by cloud:

  • Increased standardization of applications
  • Increased standardization of databases
  • Federation of security systems (Authentication and Authorization)
  • Service busses
  • Development of other common applications (GIS, collaboration, etc.)
  • Transparency of underlying hardware

Now let’s consider the need for better, real-time, accurate decision support systems (DSS).  Within any organization the value of a DSS is dependent on data integrity, data access (open data within/without an organization), and single-source data.

Frameworks for developing an effective DSS are certainly available, whether TOGAF, the US Federal Enterprise Architecture Framework (FEAF), interoperability frameworks, or service-oriented architectures (SOA).  All are fully compatible with the tools made available within the basic cloud service delivery models (IaaS, PaaS, SaaS).

The Open Group (the same organization which developed TOGAF) has responded with its model of a Service-Oriented Cloud Computing Infrastructure (SOCCI) framework.  The SOCCI is identified as the marriage of a service-oriented infrastructure and cloud computing.  The SOCCI also incorporates aspects of TOGAF into the framework, which may drive more credibility into a SOCCI architectural development process.

The expected result of this effort, for existing organizations dealing with departmental "silos" of IT infrastructure, data, and applications, is a level of interoperability and DSS development based on service-orientation, using a well-designed underlying cloud infrastructure.  This data sharing can be extended beyond the (virtual) firewall to others in an organization's trading or governmental community, resulting in a DSS which comes closer and closer to an architectural vision based on the true value of data produced by, or made available to, an organization.

While we most certainly need IaaS, and the value of moving to virtual data centers is justified by itself, we will not truly benefit from the potential of cloud computing until we understand the potential of data produced and available to decision makers.

The opportunity will need a broad spectrum of contributors and participants with awareness and training in disciplines ranging from technical capabilities to enterprise architecture, service delivery, and governance appropriate to a cloud-enabled IT world.

For those who are eagerly consuming training and knowledge in the above disciplines, the future is anything but cloudy.  For those who believe in the status quo, let's hope you are close to pension and retirement, as this is your future.
