The Changing Role of IT Professionals

Information Technology is a great field. With technology advancing at the speed of sound, there is never a period when IT becomes boring, or hits an intellectual wall. New devices, new software, more network bandwidth, and new opportunities to make all this technology do great things for our professional and private lives.

Or, it becomes a frightening professional and intellectual cyclone which threatens to make our jobs obsolete, or dilute them as business units access IT resources via a web page and a credit card, bypassing the IT department entirely.

One of the biggest challenges IT managers have traditionally faced is the need to provide both process and utility to end users and supported departments or divisions within the organization. It is easy to get tied down in a virtual mountain of spreadsheets, trouble tickets, and unhappy users while innovation races past.

The Role of IT in Future Organizations

In reality, the technology component of IT is the easy part. If, for example, I decide that it is cost-effective to transition the entire organization to a Software as a Service (SaaS) application such as MS 365, it is a pretty easy business case to bring to management.

But more questions arise, such as: does MS 365 give business users within the organization sufficient utility and creative tools to help solve business challenges and opportunities, or is it simply a new and cool application that the IT guys find interesting?

Bridging the gap between old IT and the new world does not have to be too daunting. The first step is simply understanding and accepting the fact that internal data centers are going away in favor of virtualized, cloud-enabled infrastructure. In the long term, Software as a Service and Platform as a Service-enabled information, communication, and service utilities will begin to eliminate even the most compelling justifications for physical or virtual servers.

End user devices become mobile, with the only real requirements being a high definition display, an input device, and a high speed network connection (note this does not rely on “Internet” connections). Applications and other information and decision support resources are accessed someplace in the “cloud,” relieving the user from the burden of device-resident applications and storage.

The IT department is no longer responsible for physical infrastructure

If we consider disciplines such as TOGAF (The Open Group Architecture Framework), ITIL (service delivery and management framework), or COBIT (governance and holistic organizational enablement), a common theme emerges for IT groups.

IT organizations must become full members of an organization’s business team

If we consider the potential of systems integration, interoperability, and exploitation of large data (or “big data”) within organizations, and externally among trading partners, governments, and others, the need for IT managers and professionals to graduate from the device world to the true information management world becomes a great career and future opportunity.

But this requires IT professionals to reconsider the skills and training needed to become a full business team member and contributor to an organization’s strategic vision for the future. Those skills include enterprise architecture, governance modeling, data analytics, and a view of standards and data interoperability. The value of a network routing certification, a data center facility manager, or a software installer will edge towards near zero within a few short years.

Harsh, but true.  Think of the engineers who specialized in digital telephone switches in the 1990s and early 2000s.  They are all gone.  Either retrained, repurposed, or unemployed.  The same future is hovering on the IT manager’s horizon.

So the call to action is simple. If you are a mid-career IT professional, or a new IT professional just entering the job market, prepare yourself for a new age of IT. Try to distance yourself from a device-driven career path, and instead prepare to contribute to the organization’s ability to fully exploit information from a business perspective and an architectural perspective, engaging fully with a rapidly evolving and changing information services world.

Can IT Standards Facilitate Innovation?

IT professionals continue to debate the benefits of standardization versus the benefits of innovation, and the potential of standards inhibiting engineers’ and software developers’ ability to develop creative solutions to business opportunities and challenges. At the Open Group Conference in San Diego last week (3~5 February) the topic of standards and innovation popped up not only in presentations, but also in sidebar conversations surrounding the conference venue.

In his presentation SOA4BT (Service-Oriented Architecture for Business Technology) – From Business Services to Realization,   Nikhil Kumar noted that with rigid standards there is “always a risk of service units creating barriers to business units.”  The idea is that service and IT organizations must align their intended use of standards with the needs of the business units.   Kumar further described a traditional cycle where:

  • Enterprise drivers establish ->
  • Business derived technical drivers, which encounter ->
  • Legacy and traditional constraints, which result in ->
  • “Business Required” technologies and technology (enabled) SOAs

Going through this cycle does not require a process with too much overhead; it is simply a requirement for ensuring that the use of a standard, or a standard business architecture framework, drives the business services group (IT) into the business unit circle. While IT is the source of many innovative ideas and deployments of emerging technologies, the business units are the ultimate beneficiaries of innovation, allowing the unit to address and respond to rapidly emerging opportunities or market requirements.

Standards come in a lot of shapes and sizes. One standard may be a national or international standard, such as ISO 20000 (service delivery), NIST 800-53 (security), or BICSI 002-2011 (data center design and operations). Standards may also be internal to an organization or industry, such as standardizing databases, applications, data formats, and virtual appliances within a cloud computing environment.

In his presentation “The Implications of EA in New Audit Guidelines (COBIT5),” Robert Weisman noted there are now more than 36,500 TOGAF (The Open Group Architecture Framework) certified practitioners worldwide, with more than 60 certified training organizations providing TOGAF certifications. According to ITSMinfo.com, in 2012 alone there were more than 263,000 ITIL Foundation certifications granted (for service delivery), and ISACA notes there were more than 4,000 COBIT 5 certifications granted (for IT planning, implementation, and governance) in the same period.

With a growing number of organizations either requiring, or providing training in enterprise architecture, service delivery, or governance disciplines, it is becoming clear that organizations need to have a more structured method of designing more effective service-orientation within their IT systems, both for operational efficiency, and also for facilitating more effective decision support systems and performance reporting.  The standards and frameworks attempt to provide greater structure to both business and IT when designing technology toolsets and solutions for business requirements.

So use of standards becomes very effective for providing structure and guidelines for IT toolset and solutions development.  Now to address the issue of innovation, several ideas are important to consider, including:

  • Developing an organizational culture of shared vision, values, and goals
  • Developing a standardized toolkit of virtual appliances, interfaces, platforms, and applications
  • Accepting a need for continual review of existing tools, improvement of tools to match business requirements, and allowance for further development when existing utilities and tools are not sufficient or adequate to the task

Once an aligned vision of business goals is available and achieved, a standard toolset published, and IT and business units are better integrated as teams, additional benefits may become apparent.

  • Duplication of effort is reduced with the availability of standardized IT tools
  • Incompatible or non-interoperable organizational data is either reduced or eliminated
  • More development effort is applied to developing new solutions, rather than developing basic or standardized components
  • Investors will have much more confidence in management’s ability to not only make the best use of existing resources and budgets, but also the organization’s ability to exploit new business opportunities
  • Focusing on a standard set of utilities and applications, such as database software, will not only improve interoperability, but also enhance the organization’s ability to influence vendor service-level agreements and support agreements, as well as reduce cost with volume purchasing

Rather than view standards as an inhibitor, or barrier to innovation, business units and other organizational stakeholders should view standards as a method of not only facilitating SOAs and interoperability, but also as a way of relieving developers from the burden of constantly recreating common sets and libraries of underlying IT utilities.  If developers are free to focus their efforts on pure solutions development and responding to emerging opportunities, and rely on both technical and process standardization to guide their efforts, the result will greatly enhance an organization’s ability to be agile, while still ensuring a higher level of security, interoperability, systems portability, and innovation.

PTC 2015 Wraps Up with Strong Messages on SDNs and Automation

Software Defined Networking (SDN) and Network Function Virtualization (NFV) themes dominated workshops and side conversations throughout the PTC 2015 venue in Honolulu, Hawai’i this week.

SDNs, or more specifically provisioning automation platforms for service provider interconnections, have crept into nearly all marketing materials and elevator pitches in discussions with submarine cable operators, networks, Internet Exchange Points, and carrier hotels.

While some of the material may have included a bit of “SDN washing,” for the most part each operator and service provider engaging in the discussion understands the need, is scrambling to address it, and is very serious in acknowledging a pending industry “paradigm shift” in service delivery models.

Presentations by companies such as Ciena and Riverbed showed a mature service delivery structure based on SDNs, while PacNet and Level 3 Communications (formerly TW Telecom) presented functional on-demand, self-service models of both service provisioning and a value-added marketplace.

Steve Alexander from Ciena explained some of the challenges which the industry must address such as development of cross-industry SDN-enabled service delivery and provisioning standards.  In addition, as service providers move into service delivery automation, they must still be able to provide a discriminating or unique selling point by considering:

  • How to differentiate their service offering
  • How to differentiate their operations environment
  • How to ensure industry-acceptable delivery and provisioning time cycles
  • How to deal with legacy deployments

Alexander also emphasized that as an industry we need to get away from physical wiring when possible. With 100Gbps ports, and the ability to create a software abstraction of individual circuits within the 100Gbps resource pool (as an example), there is a lot of virtual or logical provisioning that can be accomplished without the need for dozens or hundreds of physical cross connections.
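
As a rough illustration of the kind of software abstraction Alexander described, the sketch below models a 100Gbps port as a capacity pool from which logical circuits are carved, so adding an interconnection becomes a software operation rather than another physical cross connect. All class and field names are hypothetical and not drawn from any vendor’s product.

```python
# Minimal sketch (hypothetical names): carving logical circuits out of a
# 100Gbps physical port, so provisioning is a software operation rather
# than another physical cross connect.
from dataclasses import dataclass, field

@dataclass
class LogicalCircuit:
    circuit_id: str
    a_end: str              # e.g., customer cage or meet-me-room panel
    z_end: str
    bandwidth_gbps: float

@dataclass
class PhysicalPort:
    port_id: str
    capacity_gbps: float = 100.0
    circuits: list[LogicalCircuit] = field(default_factory=list)

    def available_gbps(self) -> float:
        return self.capacity_gbps - sum(c.bandwidth_gbps for c in self.circuits)

    def provision(self, circuit: LogicalCircuit) -> None:
        # Reject the request if the pool cannot honor it; otherwise the
        # "cross connect" is simply a new entry in software.
        if circuit.bandwidth_gbps > self.available_gbps():
            raise ValueError(f"insufficient capacity on port {self.port_id}")
        self.circuits.append(circuit)

port = PhysicalPort(port_id="MMR-7-0/0/1")
port.provision(LogicalCircuit("ckt-001", "CustomerA", "CloudOnRampB", 10))
print(port.available_gbps())   # 90.0 Gbps left in the shared pool
```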

The result of this effort should be an automated provisioning environment both within a single service provider and in a broader community marketplace such as a carrier hotel or large telecom interconnection facility (e.g., The Westin Building, 60 Hudson, One Wilshire). Some examples of actual and required deployments included:

  • A bandwidth on-demand marketplace
  • Data center interconnections, including within data center operators which have multiple interconnected meet-me-points spread across a geographic area
  • Interconnection to other services within the marketplace such as cloud service providers (e.g., Amazon Direct Connect, Azure, Softlayer, etc), content delivery networks, SaaS, and disaster recovery capacity and services

Robust discussions on standards also spawned debate. With SDNs, much like any other emerging use of technologies or business models, there are both competing and complementary standards. Even terms such as Network Function Virtualization (NFV), while useful, do not have much depth within standard taxonomies or definitions.

During the PTC 2015 session entitled “Advanced Capabilities in the Control Plane Leveraging SDN and NFV Toward Intelligent Networks,” a long list of current standards and products supporting the concept of SDNs was presented, including:

  • OpenContrail
  • OpenDaylight
  • OpenStack
  • OpenFlow
  • OPNFV
  • ONOS
  • OvS
  • Project Floodlight
  • Open Networking
  • and on and on….

For consumers and small network operators this is a very good development, and it will certainly usher in a new era of on-demand, self-service capacity provisioning, elastic provisioning (short-term service contracts, even down to the minute or hour), carrier hotel-based bandwidth and service marketplaces, and variable usage metering and costs, allowing much better use of OPEX budgets.

For service providers (according to discussions with several North Asian telecom carriers), it is not quite as attractive, as they generally would like to see long term, set (or fixed) contracts or wholesale capacity sales.

The connection and integration of cloud services with telecom or network services is quite clear.  At some point provisioning of both telecom and compute/storage/application services will be through a single interface, on-demand, elastic (use only what you need and for only as long as you need it), usage-based (metered), and favor the end user.

While most operators get the message and are in the process of developing and deploying their first iteration solutions, others simply still have a bit of homework to do. In the words of one CEO from a very large international data center company, “we really need to have a strategy to deal with this multi-cloud, hybrid cloud, or whatever you call it thing.”

Oh my…

PTC 2015 Focuses on Submarine Cables and SDNs

In an informal survey of words used during seminars and discussions, two main themes are emerging at the Pacific Telecommunications Council’s 2015 annual conference. The first, as expected, is development of more submarine cable capacity, both within the Pacific and to end points in ANZ, Asia, and North America. The second is software defined networking (SDN), which as envisioned could quickly begin to re-engineer the gateway and carrier hotel interconnection business.

New cable developments, including Arctic Fiber, Trident, SEA-US, and APX-E, have sparked a lot of interest. One discussion at Sunday morning’s Submarine Cable Workshop highlighted the need for Asia (and other regions) to find ways to bypass the United States, not just for performance reasons, but also to avoid US government agencies intercepting and potentially exploiting data hitting US networks and data systems.

The bottom line with all submarine cable discussions is the need for more, and more, and more cable capacity. Applications using international communications capacity, notably video, are consuming bandwidth at rates which are driving fear that the cable operators won’t be able to keep up with capacity demands.

However, perhaps the most interesting and frankly surprising development is with SDNs in the meet me room (MMR). Products such as PacNet’s PEN (PacNet Enabled Network) are finally putting reality into on-demand, self-service circuit provisioning, and soon cloud computing capacity provisioning, within the MMR. Demonstrations showed how a network, or user, can provision from 1Mbps to 10Gbps point to point within a minute.

In the past, on-demand provisioning of interconnections was limited to Internet Exchange Points, fiber cross connects, VLANs, and point-to-point Ethernet connections. Now, as carrier hotels and MMRs acknowledge the need for rapid provisioning of elastic (rapid addition and deletion of bandwidth or capacity) resources, the physical cross connect and IXP peering tools will not be adequate for market demands in the future.

SDN models, such as PacNet’s PEN, are a very innovative step towards this vision.  The underlying physical interconnection infrastructure simply becomes a software abstraction for end users (including carriers and networks) allowing circuit provisioning in a matter of minutes, rather than days.
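
PacNet publishes its own interfaces for PEN, so the snippet below is only a generic sketch of what an on-demand, elastic circuit order against a hypothetical MMR provisioning API could look like; the endpoint, field names, and token are assumptions for illustration, not the actual PEN API.

```python
# Hypothetical REST-style order for an elastic, short-term MMR circuit.
# Endpoint, fields, and authentication are illustrative only.
import json
import urllib.request

order = {
    "a_end_port": "MMR-7-0/0/1",
    "z_end_port": "MMR-3-0/1/4",
    "bandwidth_mbps": 1000,     # anywhere from 1 Mbps to 10 Gbps
    "duration_hours": 4,        # elastic: billed only while the circuit is active
}

request = urllib.request.Request(
    "https://sdn.example-mmr.net/api/v1/circuits",   # placeholder URL
    data=json.dumps(order).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <api-token>"},
    method="POST",
)

# Posting this request (urllib.request.urlopen(request)) against a real
# provisioning endpoint would return a circuit identifier within seconds
# or minutes, rather than the days required for a physical cross connect.
print(request.full_url, json.loads(request.data))
```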

The main requirement for full deployment is to “sell” carriers and networks on the concept, as key success factors will revolve around the network effect of participant communities.  Simply, the more connecting and participating networks within the SDN “community,” the more value the SDN MMR brings to a facility or market.

A great start to PTC 2015.  More PTC 2015 “sidebars” on Tuesday.

OSS Development for the Modern Data Center

Modern data centers are very complex environments. Data center operators must have visibility into a wide range of integrated databases, applications, and performance indicators to effectively understand and manage their operations and activities.

While each data center is different, all data centers share some common systems and characteristics, including:

  • Facility inventories
  • Provisioning and customer fulfillment processes
  • Maintenance activities (including computerized maintenance management systems, or CMMS)
  • Monitoring
  • Customer management (including CRM, order management, etc.)
  • Trouble management
  • Customer portals
  • Security Systems (physical access entry/control and logical systems management)
  • Billing and Accounting Systems
  • Service usage records (power, bandwidth, remote hands, etc.)
  • Decision support system and performance management integration
  • Standards for data and applications
  • Staffing and activities-based management
  • Scheduling /calendar
  • etc…

Unfortunately, in many cases, the above systems are managed manually, have no standards, or have no automation or integration interconnecting individual back office components. This includes many communication companies and telecommunications carriers which previously either adhered, or claimed to adhere, to Bellcore data and operations standards.

In some cases, the lack of integration is due to many mergers and acquisitions of companies which have unique, or non standard back office systems.  The result is difficulty in cross provisioning, billing, integrated customer management systems, and accounting – the day to day operations of a data center.

Modern data centers must have a high level of automation. In particular, if a data center operator owns multiple facilities, it becomes very difficult without automation to maintain a common look and feel or a high level of integration allowing the company to offer a standardized product to their markets and customers.

Operational support systems, or OSS, traditionally have four main components (a short illustrative sketch follows the list):

  • Support for process automation
  • Collection and storage for a wide variety of operational data
  • The use of standardized data structures and applications
  • And supporting technologies
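
As a minimal sketch of the first two components, and assuming a simple relational inventory model rather than any particular OSS product, the example below keeps facility inventory in a database table instead of a spreadsheet and expresses provisioning as a single transaction, so an item cannot be double-booked.

```python
# Minimal sketch: inventory held in a database (not a spreadsheet), with
# provisioning expressed as one transaction so an item cannot be
# double-booked. Table and column names are illustrative only.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE inventory (
                  item_id     TEXT PRIMARY KEY,
                  item_type   TEXT,              -- rack, cross connect, power circuit
                  status      TEXT DEFAULT 'available',
                  customer_id TEXT)""")
db.execute("INSERT INTO inventory (item_id, item_type) VALUES ('RACK-0101', 'rack')")

def provision(item_id: str, customer_id: str) -> bool:
    """Atomically assign an inventory item; returns False if already taken."""
    with db:  # one transaction: either the item is reserved or nothing changes
        cur = db.execute(
            "UPDATE inventory SET status = 'provisioned', customer_id = ? "
            "WHERE item_id = ? AND status = 'available'",
            (customer_id, item_id))
        return cur.rowcount == 1

print(provision("RACK-0101", "CUST-42"))   # True: the first order succeeds
print(provision("RACK-0101", "CUST-99"))   # False: no double booking
```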

In most commercial or public colocation facilities and data centers, customer and tenant organizations represent many different industries, products, and services. Some large colocation centers may have several hundred individual customers. Other data centers may have larger customers such as cloud service providers, content delivery networks, and other hosting companies. While single large customers may be few, their internally hosted or virtual customers may be at the scale of hundreds, or even thousands, of individual customers.

To effectively support their customers, data centers must have comprehensive OSS capabilities. Given the large number of processes, data sources, and user requirements, the OSS should be designed and developed using a standard architecture and framework which will ensure OSS integration and interoperability.

We have conducted numerous Interoperability Readiness surveys with both government and private sector (commercial) data center operators during the past five years. In more than 80% of surveys, processes such as inventory management had been built within simple spreadsheets. Provisioning of inventory items was normally a manual process conducted via e-mail or, in some cases, paper forms.

Provisioning as a manual process resulted in some cases in double-booked or double-sold inventory items, as well as inefficient ordering when adding customer-facing inventory or building out additional data center space.

These problems often compounded into others, such as missed customer billing cycles, accounting shortfalls, and management or monitoring system errors.

The new data center, including virtual data centers within cloud service providers, must develop better OSS tools and systems to accommodate the rapidly changing need for elasticity and agility in ICT systems. This includes having a single window for all required items within the OSS.

Preparing an OSS architecture, based on a service-oriented architecture (SOA), should include use of ICT-friendly frameworks and guidance such as TOGAF and/or ITIL to ensure all visions and designs fully acknowledge and embrace the needs of each organization’s business owners and customers, and follow a comprehensive and structured development process to ensure those objectives are delivered.

Use of standard databases, APIs, service buses, and security, and establishing a high level of governance to ensure a “standards and interoperability first” policy for all data center IT, will allow all systems to communicate, share, reuse, and ultimately provide automated, single-source data resources for all data center management, accounting, and customer activities.
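
One way to picture a “standards and interoperability first” policy in practice is that a single provisioning event should propagate automatically to billing, monitoring, and customer management rather than being re-keyed by hand. The sketch below uses a toy in-process publish/subscribe pattern purely for illustration; a production OSS would rely on a real service bus or message broker, and the topic and field names here are assumptions.

```python
# Illustrative publish/subscribe sketch: one provisioning event fans out to
# billing, monitoring, and CRM so no human re-keys the data downstream.
from collections import defaultdict
from typing import Callable

_subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

def subscribe(topic: str, handler: Callable[[dict], None]) -> None:
    _subscribers[topic].append(handler)

def publish(topic: str, event: dict) -> None:
    # Every registered back-office system receives the same single-source event.
    for handler in _subscribers[topic]:
        handler(event)

subscribe("circuit.provisioned", lambda e: print("billing: start metering", e["circuit_id"]))
subscribe("circuit.provisioned", lambda e: print("monitoring: add probe for", e["circuit_id"]))
subscribe("circuit.provisioned", lambda e: print("crm: close out order", e["order_id"]))

publish("circuit.provisioned", {"circuit_id": "ckt-001", "order_id": "ORD-77"})
```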

Any manual transfer of data between offices, applications, or systems should be avoided, in favor of integrating inventory, data collections and records, processes, and performance management indicators into a fully integrated and interoperable environment. A basic rule of thumb might be that if a human being has touched data, then the data has likely been corrupted or its integrity may be brought into question.

Looking ahead to the next generation of data center services, stepping a bit higher up the customer service maturity continuum requires much higher levels of internal process and customer process automation.

Similar to NIST’s definition of cloud computing, which lists essential characteristics including on-demand self-service, rapid elasticity, and measured service, in addition to resource pooling and broad network access, it can be assumed that data center users of the future will need to order and fulfill services such as network interconnections, power, virtual space (or physical space), and other services through self-service, on-demand ordering.

The OSS must strive to meet the following objectives:

  • Standardization
  • Interoperability
  • Reusable components and APIs
  • Data sharing

Accomplishing this will require nearly all of the above-mentioned OSS characteristics: inventories in databases (not spreadsheets), process automation, and standards in data structures, APIs, and application interoperability.

And as the ultimate key success factor, management decision support systems (DSS) will finally have the potential for true dashboards for performance management, data analytics, and additional real-time tools for making effective organizational decisions.

Business Drives Transition to IT as a Utility

Is there a point where business can safely assume they have hit the limit of what traditional IT organizations have to offer?  In an Internet and data driven world, does IT simply lack the agility and depth needed to fulfill business requirements and need for innovation?

Parts of cloud computing have sounded a loud and painful wake-up call for many IT managers. Even at the most simple level, Infrastructure as a Service (IaaS), it might be fair to say this is simply a utility to accelerate data center decommissioning, and the process of physically decoupling underlying compute, storage, and network infrastructure from the business.

Due to a lack of PaaS and SaaS interface and building block standards, we still have a long way to go before we can effectively call either a utility, or before they truly serve the needs of interoperability and systems integration.

Of course this idea is not new. Nicholas Carr kicked off the idea in his great view of the future, “The Big Switch,” with a lot of great analogies describing compute, network, and storage capacity as a modern day adaptation of the electrical grid.

We like to look at the analogy of roads (won’t look at water today, but the analogy still applies).  Roads are built using standards.  In the US the Department of Transportation establishes the need, and construction standards for Interstate Highways, and US highways.  The states establish standards and requirements for state roads, and county / local governments establish standards for everything else.

The roads are standard.  We know what to expect when driving on an Interstate Highway.  Whether it be bridge height, lane sizing, on / off ramps, or even rest stops – it is hard to be surprised when driving the Interstate Highway system.

However the highway system does not unnecessarily inhibit development of vehicles which use the highways – there are hundreds of different makes, models, and sizes of vehicles on the road, and all use the same basic infrastructure.

Getting back to cloud computing, to make our IaaS a true utility we need to ensure interoperability and portability within the underlying IaaS technologies, and allow for true on-demand portability of the physical infrastructure, management systems, provisioning systems, and billing systems. Just like the electrical grid. And standards much like the highway system, with the flexibility to support predictable, innovative ideas.
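
To make the portability point concrete, one common technique is to write provisioning logic against a thin, provider-neutral interface so workloads can move between IaaS platforms without a rewrite. The sketch below assumes two imaginary providers and is not based on any real SDK.

```python
# Generic sketch of a provider-neutral provisioning interface; the two
# providers shown are placeholders, not real cloud SDK calls.
from abc import ABC, abstractmethod

class IaasProvider(ABC):
    @abstractmethod
    def create_instance(self, cpu: int, ram_gb: int, image: str) -> str:
        """Return an instance identifier."""

class ProviderA(IaasProvider):
    def create_instance(self, cpu: int, ram_gb: int, image: str) -> str:
        return f"a-{cpu}x{ram_gb}-{image}"    # would call provider A's API here

class ProviderB(IaasProvider):
    def create_instance(self, cpu: int, ram_gb: int, image: str) -> str:
        return f"b-{cpu}x{ram_gb}-{image}"    # would call provider B's API here

def deploy(provider: IaasProvider) -> str:
    # Business logic is written once against the interface, so moving
    # between clouds becomes a configuration change, not a rewrite.
    return provider.create_instance(cpu=4, ram_gb=16, image="standard-linux")

print(deploy(ProviderA()))
print(deploy(ProviderB()))
```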

Once we have removed the burden of underlying physical IT infrastructure from our planning model, we can focus our energy on higher levels of utility, including PaaS and SaaS.

Enterprise Architecture frameworks, such as TOGAF, promote the use of Architecture Building Blocks (ABB) and Solution Building Blocks (SBB).  Where ABBs may define global, industry, and local standards, SBBs provide definition for solutions which are specific to a project, and do not normally have either standards or other reusable components to draw from.  However, development of SBBs should still acknowledge and have a design which will support either an existing  standard, or broader development of new standard interfaces in the future.

This includes the most important component of open, standard, and reusable interfaces (APIs) which support service-orientation, interoperability, and portability of data.  Which may also be considered characteristics of the future PaaS and SaaS utilities.  Or in more simple terms, edging closer to the death of proprietary data or physical interfaces and functionality.

Now a reminder – at this level we are still striving to create utilities which will ultimately reduce or eliminate our need for specialized IT.  Yes, there are exceptions where specific equipment interfaces are unique to a technology, such as rock crushers in the mining industry.  However, for example, we are still able to conduct agile business on a global scale with all our customers, competitors, suppliers, and vendors all using compatible email.

That is the objective: to make the underlying infrastructure, including much of PaaS and SaaS, standard, and to serve the needs of business innovation without the danger of being inhibited by proprietary, non-standard, or incompatible interfaces.

Build a business on innovative ideas, create competitive or unique selling points and products, focus energy on developing those innovations, and relieve yourselves of the burden resulting from carrying excessive and unproductive IT infrastructure below the business.

And then IT is a utility.

Focusing on Cloud Portability and Interoperability

Cloud Computing has helped us understand both the opportunity, and the need, to decouple physical IT infrastructure from the requirements of business.  In theory cloud computing greatly enhances an organization’s ability to not only decommission inefficient data center resources, but even more importantly eases the process an organization needs to develop when moving to integration and service-orientation within supporting IT systems.

Current cloud computing standards, such as those published by the US National Institute of Standards and Technology (NIST), have provided very good definitions and a solid reference architecture for understanding, at a high level, a vision of cloud computing.

However these definitions, while good for addressing the vision of cloud computing, are not at the level of detail needed to really understand the potential impact of cloud computing within an existing organization, nor the potential of enabling data and systems resources to meet the need for interoperability of data in a 2020 or 2025 IT world.

The key to interoperability, and subsequent portability, is a clear set of standards.  The Internet emerged as a collaboration of academic, government, and private industry development which bypassed much of the normal technology vendor desire to create a proprietary product or service.  The cloud computing world, while having deep roots in mainframe computing, time-sharing, grid computing, and other web hosting services, was really thrust upon the IT community with little fanfare in the mid-2000s.

While NIST, the Open Grid Forum, OASIS, DMTF, and other organizations have developed some levels of standardization for virtualization and portability, the reality is that applications, platforms, and infrastructure are still largely tightly coupled, restricting the ease most developers would need to accelerate higher levels of integration and interconnection of data and applications.

NIST’s Cloud Computing Standards Roadmap (SP 500-291 v2) states:

“…the migration to cloud computing should enable various multiple cloud platforms seamless access between and among various cloud services, to optimize the cloud consumer expectations and experience.

Cloud interoperability allows seamless exchange and use of data and services among various cloud infrastructure offerings and to the data and services exchanged to enable them to operate effectively together.”

Very easy to say; the reality, however, particularly with PaaS and SaaS libraries and services, is that few fully interchangeable components exist, and any information sharing is a compromise in flexibility.

The Open Group, in their document “Cloud Computing Portability and Interoperability” simplifies the problem into a single statement:

“The cheaper and easier it is to integrate applications and systems, the closer you are getting to real interoperability.”

The alternative is of course an IT world that is restrained by proprietary interfaces, extending the pitfalls and dangers of vendor lock-in.

What Can We Do?

The first thing is that the cloud consumer world must take a stand and demand that vendors produce services and applications based on interoperability and data portability standards. No IT organization in the current IT maturity continuum should be procuring systems that do not support an open, industry-standard, service-oriented infrastructure, platform, and applications reference model (Open Group).

In addition to the need for interoperable data and services, the concept of portability is essential to developing, operating, and maintaining effective disaster management and continuity of operations procedures.  No IT infrastructure, platform, or application should be considered which does not allow and embrace portability.  This includes NIST’s guidance stating:

“Cloud portability allows two or more kinds of cloud infrastructures to seamlessly use data and services from one cloud system and be used for other cloud systems.”

The bottom line for all CIOs, CTOs, and IT managers – accept the need for service-orientation within all existing or planned IT services and systems. Embrace Service-Oriented Architectures and Enterprise Architecture, and avoid at all costs the potential for vendor lock-in when considering any level of infrastructure or service.

Standards are the key to portability and interoperability, and IT organizations have the power to continue forcing adoption and compliance with standards by all vendors.  Do not accept anything which does not fully support the need for data interoperability.

Nurturing the Marriage of Cloud Computing and SOAs

In 2009 we began consulting jobs with governments in developing countries with the primary objective of consolidating data centers across government ministries and agencies into centralized, high capacity and quality data centers. At the time, nearly all individual ministry or agency data infrastructure was built into either small computer rooms or server closets with some added “brute force” air conditioning, no backup generators, no data backup, superficial security, and lots of other ailments.

The vision and strategy was that if we consolidated inefficient, end-of-life, and high risk IT infrastructure into a standardized and professionally managed facility, national information infrastructure would not only be more secure, but through standardization, volume purchasing agreements, some server virtualization, and development of broadband infrastructure, most of the IT needs of government would be easily fulfilled.

Then of course cloud computing began to mature, and the underlying technologies of Infrastructure as a Service (IaaS) became feasible. Now, not only were the governments able to decommission inefficient and high-risk IS environments, they would also be able to build virtual data centers with on-demand levels of compute, storage, and network resources. Basic data center replacement.

Even the remaining committed “server hugger” IT managers and fiercely independent governmental organizations could hardly argue against the benefits of having access to disaster recovery storage capacity through the centralized data center.

As the years passed, and we entered 2014, not only did cloud computing mature as a business model, but senior management began to increase their awareness of various aspects of cloud computing, including the financial benefits, standardization of IT resources, the characteristics of cloud computing, and potential for Platform and Software as a Service (PaaS/SaaS) to improve both business agility and internal decision support systems.

At the same time, information and organizational architecture, governance, and service delivery frameworks such as TOGAF, COBIT, ITIL, and risk analysis training reinforced the value of both data and information within an organization, and the need for IT systems to support higher level architectures supporting decision support systems and market interactions (including Government to Government, Business, and Citizens for the public sector).

2015 will bring cloud computing and architecture together at levels just becoming comprehensible to much of the business and IT world. The Open Group has a good first stab at building a standard for this marriage with their Service-Oriented Cloud Computing Infrastructure (SOCCI). According to the SOCCI standard:

“Infrastructure is a foundational element for enterprise architecture. Infrastructure has been traditionally provisioned in a physical manner. With the evolution of virtualization technologies and application of service-orientation to infrastructure, it can now be offered as a service.

Service-orientation principles originated in the business and application architecture arena. After repeated, successful application of these principles to application architecture, IT has evolved to extending these principles to the infrastructure.”

At first glance the SOCCI standard appears to be a document which creates a mapping between enterprise architecture (TOGAF) and cloud computing. At second glance, the SOCCI standard really steps towards tightening the loose coupling of standard service-oriented architectures through use of cloud computing tools included with all service models (IaaS/PaaS/SaaS).

The result is an architectural vision which is easily capable of absorbing existing IT requirements, as well as incorporating emerging big data analytics models, interoperability, and enterprise architecture.

Since the early days of 2009 discussion topics with government and enterprise customers have shown a marked transition from simply justifying decommissioning of high risk data centers to how to manage data sharing, interoperability, or the potential for over standardization and other service delivery barriers which might inhibit innovation – or ability of business units to quickly respond to rapidly changing market opportunities.

2015 will be an exciting year for information and communications technologies.  For those of us in the consulting and training business, the new year is already shaping up to be the busiest we have seen.

It is Time to Get Serious about Architecting ICT

Just finished another ICT-related technical assistance visit with a developing country government. Even in mid-2014, I spend a large amount of time teaching basic principles of enterprise architecture, and the need for adding form and structure to ICT strategies.

Service-oriented architectures (SOA) have been around for quite a long time, with some references going back to the 1980s. ITIL, COBIT, TOGAF, and other ICT standards or recommendations have been around for quite a long time as well, with training and certifications part of nearly every professional development program.

So why is the idea of architecting ICT infrastructure still an abstraction to so many in government and even private industry? It cannot be the lack of training opportunities, or publicly available reference materials. It cannot be the lack of technology, or the lack of consultants readily willing to assist in deploying EA, SOA, or interoperability within any organization or industry cluster.

During the past two years we have run several Interoperability Readiness Assessments within governments. The assessment initially takes the form of a survey, and is distributed to a sample of 100 or more participants, with positions ranging from administrative task-based workers, to Cxx or senior leaders within ministries and government agencies.

Questions range from basic ICT knowledge to data sharing, security, and decision support systems.

While the idea of information silos is well-documented and understood, it is still quite surprising to see “siloed” attitudes are still prevalent in modern organizations.  Take the following question:

Question on Information Sharing

This question did not refer to sharing data outside of the government, but rather within the government. The responses indicate a significant lack of trust when interacting with other government agencies, which will of course prevent any chance of developing a SOA or facilitating information sharing among agencies. The end result is a lower level of both integrity and value in national decision support capability.

The Impact of Technology and Standardization

Most governments are considering or implementing data center consolidation initiatives.  There are several good reasons for this, including:

  • Cost of real estate, power, staffing, maintenance, and support systems
  • Transition from CAPEX-based ICT infrastructure to OPEX-based
  • Potential for virtualization of server and storage resources
  • Standardized cloud computing resources

While all of those justifications for data center consolidation are valid, their value potentially pales in comparison to the potential of more intelligent use of data across organizations, and even externally with outside agencies. On the challenges of getting to this point, one senior government official stated:

“Government staff are not necessarily the most technically proficient.  This results in reliance on vendors for support, thought leadership, and in some cases contractual commitments.  Formal project management training and certification are typically not part of the capacity building of government employees.

Scientific approaches to project management, especially ones that lend themselves to institutionalization and adoption across different agencies will ensure a more time-bound and intelligent implementation of projects. Subsequently, overall knowledge and technical capabilities are low in government departments and agencies, and when employees do gain technical proficiency they will leave to join private industry.”

There is also an issue with a variety of international organizations going into developing countries or developing economies and offering no-cost or low-cost single-use ICT infrastructure, such as for health-related agencies, which is not compatible with any other government owned or operated applications or data sets.

And of course the more this occurs, the more difficult it becomes for government organizations to enable interoperability or data sharing, and thus an enterprise architecture or data sharing becomes either impossible or extremely difficult to implement.

The Road to EA, SOAs, and Decision Support

There are several actions to take on the road to meeting our ICT objectives.

  1. Include EA, service delivery (ITIL), governance (COBIT), and SOA training in all university and professional ICT education programs.  It is not all about writing code or configuring switches, we need to ensure a holistic understanding of ICT value in all ICT education, producing a higher level of qualified graduates entering the work force.
  2. Ensure government and private organizations develop or adopt standards or regulations which drive enterprise architecture, information exchange models, and SOAs as a basic requirement of ICT planning and operations.
  3. Ensure executive awareness and support, preferably through a formal position such as the Chief Information Officer (CIO). Principles developed and published via the CIO must be adopted and governed by all organizations.

Nobody expects large organizations, in particular government organizations, to change their cultures of information independence overnight. This is a long term evolution as the world continues to better understand the value, and the extent of value, within existing data sets, and begins creating new categories of data. Big data, data analytics, and exploitation of both structured and unstructured data will empower those who are prepared, and leave those who are not prepared far behind.

For a government, not having the ability to access, identify, share, analyze, and address data created across agencies will inhibit effective decision support, with potential impact on disaster response, security, economic growth, and overall national quality of life.

If there is a call to action in this message, it is for governments to take a close look at how their national ICT policies, strategies, human capacity, and operations are meeting national objectives. Prioritizing use of EA and supporting frameworks or standards will provide better guidance across government, and all steps taken within the framework will add value to the overall ICT capability.

Pacific-Tier Communications LLC provides consulting to governments and commercial organizations on topics related to data center consolidation, enterprise architecture, risk management, and cloud computing.

Asian Carriers’ Conference 2013 Kicks Off in Cebu

The 2013 ACC kicked off on Tuesday morning with an acknowledgement by Philippine Long Distance Telephone Company (PLDT) CEO Napoleon L. Nazareno that “we’re going through a profound and painful transformation to digital technologies.” He continued to explain that, in addition to making the move to a digital corporate culture and architecture, traditional telcos will need to “master new skills, including new partnership skills” in order to succeed.

That direction drives a line straight down the middle of attendees at the conference. Surprisingly, many companies attending and advertising their products still focus on “minutes termination,” and traditional voice-centric relationships with other carriers and “voice” wholesalers.

Matthew Howett, Regulation and Policy Practice Leader at Ovum Research, noted “while fixed and mobile minutes are continuing to grow, traditional voice revenue is on the decline.” He backed the statement up with figures on “Over the Top” (OTT) services, in which a service provider sends all types of communications, including video, voice, and other connections, over an Internet protocol network – most commonly over the public Internet.

Howett informed the ACC’s plenary session attendees that Ovum Research believes up to US$52 billion will be lost in traditional voice revenues to OTT providers by 2016, and an additional US$32.6 billion to instant messaging providers in the same period.

The message to traditional communications carriers was simple – adapt or become irrelevant. National carriers may try to work with government regulators to adopt legal barriers preventing OTTs from operating in their country, however that is only a temporary step to stem the flow of “technology-enabled” competition and retain revenues.

As noted by Nazareno, the carriers must wake up to the reality that we are in a global technology refresh cycle, and construct business visions, expectations, and plans that will not only allow the company to survive, but also meet the needs of their users and national objectives.

Martin Geddes, owner of Martin Geddes Consulting, introduced the idea of “task substitution.” Task substitution occurs when an individual or organization is able to use a substitute technology or process to accomplish tasks that were previously only available from a single source. One example is the traditional telephone call. In the past you would dial a number, and the telephone company would go through a series of connections, switches, and processes that would both connect two end devices, as well as provide accounting for the call.

The telephone user now has many alternatives to the traditional phone call – all task substitutions. You can use Skype, WebEx, GoToMeeting, instant messaging – any one of a multitude of utilities allowing an individual or group to participate in one to one or many to many communications. When a strong list of alternative methods to complete a task exist, then the original method may become obsolete, or have to rapidly adapt to avoid being discarded by users.

A strong message, which made many attendees visibly uncomfortable.

Ivan Landen, Managing Director at Asia-Pacific Expereo, described the telecom revolution in terms all attendees could easily visualize: “Today around 80% of the world’s population have access to the electrical grid, while more than 85% of the population has access to wireless.”

He also provided an additional bit of information which did not surprise attendees, but made some of the telecom representatives a bit uneasy: in a survey Geddes conducted, more than half of the business executives polled admitted their Internet access was better at their homes than in their offices. This information can be analyzed in several different ways, from poor IT planning within the company, to poor capacity management within the communication provider, to the reality that traffic on consumer networks is simply lower during the business day than during other time periods.

However the main message was “there is a huge opportunity for communication companies to fix business communications.”

The conference continues until Friday. Many more sessions, many more perimeter discussions, and a lot of space for the telecom community to come to grips with the reality “we need to come to grips with the digital world.”
