The Trouble with IT Disintermediation

I have a client who is concerned that some of their departments are bypassing the organization’s traditional IT process and going directly to cloud vendors for their IT resource needs.  This is not really unique, as the cloud computing industry has disrupted IT provisioning processes, not to mention caused a near-complete loss of control over configuration management databases and inventories.

IT service disintermediation occurs when end users cut out the middleman in procuring ICT services and go directly to the service provider with an independent account.  Disintermediation normally occurs when one or more of the following conditions exist:

  1. The end user desires to remain independent, for reasons of control, use of decentralized budgets, or simply individual pride.
  2. The organizational service provider does not have a suitable resource available to meet the end user’s needs.
  3. The end user does not have confidence in the organizational service provider.
  4. The organizational service provider has a suitable service, but is unable or unwilling to provision it in a way that meets the end user’s demands for timing, capacity, or other requirements.  This is often the result of a lengthy, bureaucratic process that is neither agile nor flexible, and does not promote a “sense of urgency” in completing provisioning tasks.
  5. The organizational service provider is unable or unwilling to accommodate “special” orders which fall outside its service portfolio.
  6. The organizational service provider does not respond to rapidly changing market, technology, and usage opportunities, creating barriers that prevent business units from competing or responding to external conditions.

The result is pretty bad for any organization.  The consequences of this failure may include:

  • Loss of control over IT budgets – decentralized IT spending which does not fall within a strategic plan or policy cannot be controlled.
  • Inability to develop and maintain organizational relationships with select or approved vendors.  Vendors relish the potential of disrupting single points of contact within large organizations, as it allows them to develop and sustain multiple high-value contracts with individual agencies, rather than falling within volume purchasing agreements, audits, standards, security, SLAs, training, and so on.
  • Individual applications will normally result in incompatible information silos.  While interoperability within an organization is a high priority, particularly when looking at service-orientation and organizational decision support systems, disintermediation will result in failure, or extreme difficulty, in developing data-sharing structures.
  • Poor continuity of operations and disaster management.  Non-standard systems are normally not fully documented, and often are not made visible to the organization’s IT management or support operations.  Thus, when a disaster occurs, there is a high risk of complete data loss, or an inability to quickly restore full services to the organization, customers, and the general user base.
  • Difficulty in data and systems portability.  If or when a service provider fails to meet the expectations of the end user, goes out of business, or for some reason stops supporting the user, the existing data and systems should be portable to another service provider (portability is also addressed in the NIST guidance).

While there are certainly other considerations, this covers the main pain points disintermediation might present.

The next obvious question is how best to mitigate the condition.  This is a more difficult issue than in the past, as it is now so easy to establish an account and resources with a cloud company using nothing more than a credit card, or through an aggressive salesperson.

As technology and organizational priorities change, policies must change to recognize and accommodate reasonable change.  End users must be fully aware of the products and services IT departments have to offer, and IT departments in turn must bring an aggressive sense of urgency to responding to and fulfilling those requirements.

Responsibility falls in two areas: 1) ensuring the organizational service provider is able to meet the needs of end users (or is able to find solutions in a timely manner to assist the end user), and 2) developing policies and processes which not only facilitate end user acquisition of resources, but also establish accountability when those policies are not followed.

In addition, the organizational service provider must follow standard architectural and governance processes, including continual review and improvement cycles.

Risk Management Strategies for IT Systems

Risk management has been around for a long time.  Financial managers run risk assessments for nearly all business models, and the idea of risk carries nearly as many definitions as the Internet itself.  However, for IT managers and IT professionals, risk management still frequently takes a far lower priority than other operations and support activities.

For IT managers, a good, simple definition of risk comes from the Open FAIR model, which states:

“Risk is defined as the probable frequency and magnitude of future loss”   (Open FAIR)

Risk management should follow a structured process that acknowledges the many aspects of IT operations, with special consideration for security and systems availability.

Risk management frameworks such as Open FAIR distill risk into a structure of probabilities, frequencies, and values.  Each critical system or process is considered independently, with the probability of a disruption or loss event paired with a probable loss value.
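To make that structure concrete, the sketch below works through the basic arithmetic: each scenario’s annualized exposure is its probable loss event frequency multiplied by its probable loss magnitude.  This is only an illustration of the idea, not the full Open FAIR taxonomy, and the scenario names and figures are hypothetical.

```python
# Minimal sketch: annualized loss exposure per critical system, computed as
# probable loss event frequency (events per year) x probable loss magnitude
# (loss per event). Scenarios and figures are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class RiskScenario:
    name: str
    loss_event_frequency: float     # estimated loss events per year
    probable_loss_magnitude: float  # estimated loss per event, in currency units

    def annualized_loss_exposure(self) -> float:
        return self.loss_event_frequency * self.probable_loss_magnitude

scenarios = [
    RiskScenario("Billing platform outage", 0.5, 120_000),
    RiskScenario("Customer data breach", 0.1, 900_000),
    RiskScenario("Primary data center power failure", 0.25, 400_000),
]

# Rank scenarios so mitigation effort is directed at the largest probable exposure first.
for s in sorted(scenarios, key=lambda s: s.annualized_loss_exposure(), reverse=True):
    print(f"{s.name}: {s.annualized_loss_exposure():,.0f} per year")
```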

It would not be uncommon for an organization to perform numerous risk assessments of critical systems, identifying and correcting shortfalls as needed to mitigate the probability or magnitude of a potential event or loss.  Much like other frameworks used in enterprise architecture, service delivery (such as ITIL), or governance, the objective is to produce a structured risk assessment and analysis approach without becoming overwhelming.

IT risk management has been neglected in many organizations, possibly due to the rapid evolution of IT systems, including cloud computing and the implementation of broadband networks.  When service disruptions or security events occur, those organizations find themselves unprepared to deal with the loss magnitude of the disruption, and that lack of preparation or mitigation may result in the organization never fully recovering from the event.

Fortunately, the processes and frameworks guiding risk management are becoming far more mature, and attainable by nearly all organizations.  The Open Group’s Open FAIR standard and taxonomy provide a very robust framework, as does ISACA’s COBIT 5 for Risk guidance.

In addition, the US Government’s National Institute of Standards and Technology (NIST) provides open risk assessment and management guidance for both government and non-government users within the NIST Special Publication Series, including SP 800-30 (Risk Assessment), SP 800-37 (System Risk Management Framework), and SP 800-39 (Enterprise-Wide Risk Management).

ENISA also publishes a risk management process which is compliant with the ISO 13335 standard and builds on ISO 27005.

What is the objective of going through the risk assessment and analysis process?  Of course, it is to build mitigation controls, or resistance, to potential disruptions, threats, and events that would result in a loss to the company or to other direct and secondary stakeholders.

However, many organizations, particularly small to medium enterprises, do not believe they have the resources to go through risk assessments, have no formal governance or security management process, or simply see no value in spending time on activities which do not directly support rapid growth and development of the company.  Those organizations continue to be at risk.

As managers, leaders, investors, and customers, we have an obligation to ensure our own internal risk is assessed and understood, and, from the viewpoint of customers or consumers, to ensure our suppliers and vendors are following formal risk management processes.  In a fast, agile, global, and unforgiving market, the alternative is not pretty.

Putting Enterprise Architecture Principles to Work

This week brought another great consulting gig, working with old friends and respected colleagues.  The challenge driving the consultation was brainstorming a new service for their company, and how best to get it into operation.

The new service vision was pretty good.  The service would fill a hole, or shortfall, in the industry and would better enable their customers to compete in markets both in the US and abroad.  However, the process for planning and delivering this service, well, simply did not exist.

The team’s sense of urgency to deliver the service was high, based on a perception that if they did not move quickly, they would suffer an opportunity loss while competitors moved to fill the service need themselves.

While it may have been easy to “jump on the bandwagon” and share the team’s enthusiasm, they lacked several critical components of delivering a new service, including:

  • No specific product or service definition
  • No market analysis or survey, even at a high level
  • No cost analysis or revenue projection
  • No risk analysis
  • No high level implementation plan or schedule

“We have great ideas from vendors, and are going to try and put together a pilot test as quickly as possible.  We are trying to gather a few of our customers to participate right now,” stated one of the team.

At that point, reluctantly, I had to put on the brakes.  While making no attempt to dampen the team’s enthusiasm, I forced them to consider the additional requirements of a successful service launch, such as:

  • The need to build a business case
  • The need for integration of the service into existing back office systems, such as inventory, book-to-bank, OSS, management and monitoring, finance and billing, executive dashboards (KPIs, service performance, etc.)
  • Staffing and training requirements
  • Options of in-sourcing, outsourcing, or partnering to deliver the service
  • Developing RFPs (even simple RFPs) to help evaluate vendor options
  • and a few other major items

“That just sounds like too much work.  If we need to go through all that, we’ll never deliver the service.  Better to just work with a couple vendors and get it on the street.”

I should note the service would touch many, many people in the target industry, which is very tech-centric.  Success or failure of the service could have a major impact on the success or failure of many in the industry.

As a card-carrying member of the enterprise architecture cult, and a proponent of other IT-related frameworks such as ITIL, COBIT, and Open FAIR, I know there are bound to be conflicts between following a very structured approach to building business services and the need for agile creativity and innovation.

In this case, I asked the team to indulge me for a few minutes while I mapped out a simple, structured approach to developing and delivering the envisioned service.  By using a simplified version of the TOGAF Architecture Development Method (ADM), and adding a few lines related to standards and service development methodology, such as the vision –> AS-IS –> gap analysis –> solutions development model, it did not take long for the team to reconsider their aggressive approach.
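As a rough illustration of that simplified flow (a sketch only, not the TOGAF ADM itself; the phase names and gate questions below are invented for this example), the model can be treated as a set of ordered gates the team must pass before delivery:

```python
# Sketch of the simplified vision -> AS-IS -> gap analysis -> solutions flow
# discussed above. Phase names and gate questions are illustrative, not taken
# from the TOGAF standard.

PHASES = [
    ("Vision", "Are the service definition and business case agreed?"),
    ("AS-IS baseline", "Are current systems, processes, and staffing documented?"),
    ("Gap analysis", "What is missing between the AS-IS state and the target service?"),
    ("Solution development", "Which build, buy, or partner option closes the gaps?"),
]

def next_gate(completed: set[str]) -> str:
    """Return the first unanswered gate question, or confirm readiness to deliver."""
    for phase, gate_question in PHASES:
        if phase not in completed:
            return f"Stop at '{phase}': {gate_question}"
    return "All gates passed - proceed to service delivery."

# Example: the team has a vision but no documented baseline yet.
print(next_gate({"Vision"}))
```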

When we prepared a chart of timelines using the “TOGAF Light” EA approach, the timelines were oddly similar to those of the aggressive approach.  The main difference was that, at the end of the EA approach, the service would be the product of a logical, disciplined, measurable, governable, and flexible process.

Sounds a bit utopian, but in reality we were able to get to the service delivery with a better product, without sacrificing any innovation, agility, or market urgency.

This is the future of IT.  As we continue to move away from the frenzied service deliveries of the Internet age and begin focusing on the business nature of IT, including the role it plays in critical global infrastructure, the discipline of structured product and service development and delivery will continue to gain importance.

Adopting Critical Thinking in Information Technology

The scenario is a data center, late on a Saturday evening.  A telecom distribution system fails, and operations staff are called in from their weekend to find the problem and restore operations as quickly as possible.

As time goes on, many customers begin to call in and open trouble tickets, upset at the system outages and escalating customer disruptions.

The team spends hours trying to fix a rectifier providing DC power to a main telecommunications distribution switch, starting by replacing each system component one by one in hopes of finding the guilty part.  The team grows very frustrated, due not only to fatigue but also to their failure to solve the problem.  After many hours the team finally realizes there is no issue with either the telecom switch or the rectifier supplying DC power to it.  What could the problem be?

Finally, after many hours of troubleshooting, chasing symptoms, and hit-or-miss component replacements, an electrician discovers a panel circuit that has failed due to many years of misuse (for the electrical engineers, it was a circuit that oxidized and shorted from years of “over-amping” without preventive maintenance or routine checks).

The incident highlighted a reality – the organization working on the problem had very little critical thinking or problem-solving skill.  They chased each obvious symptom, but never really addressed or successfully identified the underlying problem.  Great technicians, poor critical thinkers.  And a true story.

While this incident was a data center troubleshooting failure, we frequently fail to use good critical thinking not only in troubleshooting, but also in developing opportunities and solutions for our business users and customers.

A few years ago I took a break from the job and spent some time working on personal development.  In addition to collecting certifications in TOGAF, ITIL, and other architecture-related subjects, I added a couple of extra classes, including the Kepner-Tregoe (K-T) and Kepner-Fourie (K-F) critical thinking and problem-solving courses.

Not bad schools of thought, and a good refresher course reminding me of those long since forgotten systems management skills learned in graduate school – heck, nearly 30 years ago.

Here is the problem: IT systems and business use of technologies have rapidly developed during the past 10 years, and that rate of change appears to be accelerating.  Processes and standards developed 10, 15, or 20 years ago are woefully inadequate to support much of our technology and business-related design, development, and operations.  Tacit knowledge, tacit skills, and gut feelings cannot be relied on to correctly identify and solve problems we encounter in our fast-paced IT world.

Keep in mind, this discussion is not only related to problem solving, but also works just as well when considering new product or solution development for new and emerging business opportunities or challenges.

Critical thinking forces us to know what a problem (or opportunity) is, to know and apply the differences between inductive and deductive reasoning, to identify premises and conclusions, to distinguish good and bad arguments, and to acknowledge issue descriptions and explanations (Erlandson).

Critical thinking “religions” such as Kepner-Fourie (K-F) provide a process and model for solving problems.  Not bad if you have the time to create and follow heavy processes, or better yet, can automate much of the process.  However, even studying extensive systems like K-T and K-F will continue to drive the need to establish an appropriate system for responding to events.

Regardless of the approach you consider, repeated exposure to critical thinking concepts and practice forces us to intellectually step away from chasing symptoms or over-relying on tacit knowledge (automatic thinking) when responding to problems and challenges.

For IT managers, think of it as an intellectual ITIL Continuous Improvement Cycle – we always need to exercise our brains and thought processes.  The status quo, or relying on time-honored solutions to problems, will probably not be sufficient to bring our IT organizations into the future.  We need to keep ensuring our assumptions are based on facts and to avoid undue influence – in particular from vendors – so that our stakeholders have confidence in our problem-solving and solution development processes, and so that we maintain a good awareness of the business and technology transformations impacting our actions.

In addition to the courses and critical thinking approaches listed above, exposure to and study of any of the following can only help ensure we continue to exercise and hone our critical thinking skills:

  • A3 Management
  • Toyota Kata
  • PDSA (Plan-Do-Study-Act)

And lots of other university or related courseware.  For myself, I keep my interest alive by reading an occasional eBook (such as “How to Think Clearly: A Guide to Critical Thinking” by Doug Erlandson – great reading during long flights) and watching YouTube videos.

What do you “think?”

Developing a New “Service-Centric IT Value Chain”

As IT professionals we have been overwhelmed with different standards for each component of architecture, service delivery, governance, security, and operations.  Not only does IT need to ensure technical training and certification, but IT staff are also expected to pursue certifications in ITIL, TOGAF, COBIT, PMP, and a variety of other frameworks – at a high cost in both time and money.

Wouldn’t it be nice to have an IT framework or reference architecture which brings the important components of each standard or recommendation into a single model, focused on the most important aspects of each?

The Open Group is well-known for publishing TOGAF (The Open Group Architecture Framework), in addition to a variety of other standards and frameworks related to Service-Oriented Architectures (SOA), security, risk, and cloud computing.  In the past few years, recognizing the impact of broadband, cloud computing, SOAs, and the need for a holistic enterprise architecture approach to business and IT, the group has published many common-sense but powerful recommendations, such as:

  • TOGAF 9.1
  • Open FAIR (Risk Analysis and Assessment)
  • SOCCI (Service-Oriented Cloud Computing Infrastructure)
  • Cloud Computing
  • Open Enterprise Security Architecture
  • Document Interchange Reference Model (for interoperability)
  • and others.

The Open Group’s latest project intended to streamline and focus IT systems development is called the “IT4IT” Reference Architecture.  While still in the development, or “snapshot,” phase, IT4IT is surprisingly easy to read and understand, and most importantly, logical.

“The IT Value Chain and IT4IT Reference Architecture represent the IT service lifecycle in a new and powerful way. They provide the missing link between industry standard best practice guides and the technology framework and tools that power the service management ecosystem. The IT Value Chain and IT4IT Reference Architecture are a new foundation on which to base your IT operating model. Together, they deliver a welcome blueprint for the CIO to accelerate IT’s transition to becoming a service broker to the business.” (Open Group’s IT4IT Reference Architecture, v 1.3)

The IT4IT Reference Architecture acknowledges the changes in both technology and business resulting from the incredible impact the Internet and automation have had on enterprise and government use of information and data.  However, the document also makes a compelling case that IT systems, theory, and operations have kept up with neither existing IT support technologies nor the business visions and objectives IT is meant to serve.

IT4IT’s development team is a large, global collaborative effort including vendors, enterprises, telecommunications companies, academia, and consulting firms.  This helps drive a vendor- and technology-neutral framework, focused more on running IT as a business than on conforming to a single vendor’s product or service.  Eventually, like all developing standards, IT4IT may push vendors and systems developers to provide a solid model and framework for developing business solutions, supporting greater interoperability and data sharing between both internal and external organizations.

The vision and objectives for IT4IT include two major components: the IT Value Chain and the IT4IT Reference Architecture.  Within the IT4IT Core are sections providing guidance on:

  • IT4IT Abstractions and Class Structures
  • The Strategy to Portfolio Value Stream
  • The Requirement to Deploy Value Stream
  • The Request to Fulfill Value Stream
  • The Detect to Correct Value Stream

Each of the above sections has borrowed from, or further developed, ideas and activities from ITIL, COBIT, and TOGAF, but takes a giant leap by incorporating cloud computing, SOAs, and enterprise architecture into the product.

As the IT4IT Reference Architecture is completed and supporting roadmaps are developed, the IT4IT concept will no doubt find a large legion of supporters, as many, if not most, businesses and IT professionals find the certification and knowledge path for ITIL, COBIT, TOGAF, and other supporting frameworks either too expensive or too time-consuming (both in training and implementation).

Take a look at IT4IT on the Open Group’s website and let us know what you think.  Too light?  Not needed?  A great idea or concept?

Nurturing the Marriage of Cloud Computing and SOAs

In 2009 we began consulting jobs with governments in developing countries, with the primary objective of consolidating data centers across government ministries and agencies into centralized, high-capacity, high-quality facilities.  At the time, nearly all individual ministry or agency data infrastructure was built into small computer rooms or server closets with some added “brute force” air conditioning, no backup generators, no data backup, superficial security, and lots of other ailments.

The vision and strategy was that if we consolidated inefficient, end-of-life, and high-risk IT infrastructure into a standardized and professionally managed facility, the national information infrastructure would not only be more secure, but through standardization, volume purchasing agreements, server virtualization, and the development of broadband infrastructure, most of the IT needs of government could be easily fulfilled.

Then, of course, cloud computing began to mature, and the underlying technologies of Infrastructure as a Service (IaaS) became feasible.  Now, not only were the governments able to decommission inefficient and high-risk IS environments, they could also build virtual data centers with on-demand compute, storage, and network resources.  Basic data center replacement.

Even the remaining committed “server hugger” IT managers and fiercely independent governmental organizations could hardly argue against the benefits of having access to disaster recovery storage capacity through the centralized data center.

As the years passed and we entered 2014, not only did cloud computing mature as a business model, but senior management began to increase their awareness of various aspects of cloud computing, including its financial benefits, the standardization of IT resources, its essential characteristics, and the potential for Platform and Software as a Service (PaaS/SaaS) to improve both business agility and internal decision support systems.

At the same time, information and organizational architecture, governance, and service delivery frameworks such as TOGAF, COBIT, and ITIL, along with risk analysis training, reinforced the value of both data and information within an organization, and the need for IT systems to support higher-level architectures for decision support systems and market interactions (including government-to-government, government-to-business, and government-to-citizen interactions in the public sector).

2015 will bring cloud computing and architecture together at levels just becoming comprehensible to much of the business and IT world.  The Open Group has made a good first stab at building a standard for this marriage with its Service-Oriented Cloud Computing Infrastructure (SOCCI).  According to the SOCCI standard:

“Infrastructure is a foundational element for enterprise architecture. Infrastructure has been traditionally provisioned in a physical manner. With the evolution of virtualization technologies and application of service-orientation to infrastructure, it can now be offered as a service.

Service-orientation principles originated in the business and application architecture arena. After repeated, successful application of these principles to application architecture, IT has evolved to extending these principles to the infrastructure.”

At first glance the SOCCI standard appears to be a document which creates a mapping between enterprise architecture (TOGAF) and cloud computing.  At second glance, the SOCCI standard really steps towards tightening the loose coupling of standard service-oriented architectures through the use of cloud computing tools included with all service models (IaaS/PaaS/SaaS).
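One way to picture that loose coupling (a simplified sketch under assumed interfaces, not the SOCCI standard’s own model) is a workload that depends only on an infrastructure service contract, regardless of whether the capacity behind it is provisioned physically or drawn from an IaaS pool:

```python
# Sketch of service-oriented infrastructure: the consumer depends only on a
# service contract, not on whether capacity is physical or cloud-based.
# Classes, method names, and capacities are hypothetical, for illustration only.

from abc import ABC, abstractmethod

class InfrastructureService(ABC):
    @abstractmethod
    def provision(self, vcpus: int, memory_gb: int) -> str:
        """Request compute capacity through the service interface."""

class PhysicalProcurement(InfrastructureService):
    def provision(self, vcpus: int, memory_gb: int) -> str:
        return f"Ticket raised to install a server with {vcpus} cores / {memory_gb} GB"

class IaaSPool(InfrastructureService):
    def provision(self, vcpus: int, memory_gb: int) -> str:
        return f"Virtual machine created on demand with {vcpus} vCPUs / {memory_gb} GB"

def deploy_workload(infra: InfrastructureService) -> None:
    # The workload is loosely coupled to whatever provider sits behind the contract.
    print(infra.provision(vcpus=4, memory_gb=16))

deploy_workload(IaaSPool())             # cloud-backed infrastructure as a service
deploy_workload(PhysicalProcurement())  # same consumer code, physical provisioning
```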

The result is an architectural vision which is easily capable of absorbing existing IT requirements, as well as incorporating emerging big data analytics models, interoperability, and enterprise architecture.

Since the early days of 2009, discussion topics with government and enterprise customers have shown a marked transition from simply justifying the decommissioning of high-risk data centers to managing data sharing, interoperability, and the potential for over-standardization and other service delivery barriers which might inhibit innovation – or the ability of business units to quickly respond to rapidly changing market opportunities.

2015 will be an exciting year for information and communications technologies.  For those of us in the consulting and training business, the new year is already shaping up to be the busiest we have seen.
