PTC 2015 Wraps Up with Strong Messages on SDNs and Automation

Software Defined Networking (SDN) and Network Function Virtualization (NFV) themes dominated workshops and side conversations throughout the PTC 2015 venue in Honolulu, Hawai’i this week.

Carrier SDNs, or more specifically provisioning automation platforms for service provider interconnections, have crept into nearly all marketing materials and elevator pitches in discussions with submarine cable operators, networks, Internet Exchange Points, and carrier hotels.

While some of the material may have included a bit of “SDN washing,” for the most part each operator and service provider engaging in the discussion understands and is scrambling to address the need for communications access, and is very serious in acknowledging a pending industry “paradigm shift” in service delivery models.

Presentations by companies such as Ciena and Riverbed showed a mature service delivery structure based on SDNs, while PacNet and Level 3 Communications (formerly TW Telecom) presented functional on-demand, self-service models of both service provisioning and a value-added marketplace.

Steve Alexander from Ciena explained some of the challenges the industry must address, such as development of cross-industry SDN-enabled service delivery and provisioning standards. In addition, as service providers move into service delivery automation, they must still be able to provide a discriminating or unique selling point by considering:

  • How to differentiate their service offering
  • How to differentiate their operations environment
  • How to ensure industry-acceptable delivery and provisioning time cycles
  • How to deal with legacy deployments

Alexander also emphasized that as an industry we need to get away from physical wiring when possible. With 100Gbps ports, and the ability to create a software abstraction of individual circuits within the 100Gbps resource pool (as an example), a lot of virtual or logical provisioning can be accomplished without the need for dozens or hundreds of physical cross connections.
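A minimal sketch of the capacity bookkeeping such an abstraction implies – the class and method names are hypothetical, not any vendor’s controller API – shows how logical circuits can be carved out of, and returned to, a single 100Gbps port without touching a physical cross connect:

```python
# Hypothetical illustration of logical circuit provisioning against one
# physical 100Gbps port: the software tracks headroom, so adding or removing
# a "circuit" is a bookkeeping operation rather than a new cable run.

class PortPool:
    def __init__(self, capacity_gbps=100):
        self.capacity_gbps = capacity_gbps
        self.circuits = {}                  # circuit_id -> allocated Gbps

    def available(self):
        return self.capacity_gbps - sum(self.circuits.values())

    def provision(self, circuit_id, gbps):
        """Allocate a logical circuit if headroom exists on the physical port."""
        if gbps > self.available():
            raise ValueError(f"only {self.available()} Gbps free")
        self.circuits[circuit_id] = gbps
        return circuit_id

    def release(self, circuit_id):
        self.circuits.pop(circuit_id, None)

pool = PortPool()
pool.provision("cust-A-to-IX", 10)
pool.provision("cust-B-dci", 40)
print(pool.available())                     # -> 50 Gbps still free on the same port
```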

The result of this effort should be an environment that exists both within a single service provider and in a broader community marketplace such as a carrier hotel or large telecom interconnection facility (e.g., The Westin Building, 60 Hudson, One Wilshire). Some examples of actual and required deployments included:

  • A bandwidth on-demand marketplace
  • Data center interconnections, including among data center operators which have multiple interconnected meet-me points spread across a geographic area
  • Interconnection to other services within the marketplace such as cloud service providers (e.g., Amazon Direct Connect, Azure, SoftLayer, etc.), content delivery networks, SaaS, and disaster recovery capacity and services

Robust discussions on standards also spawned debate. With SDNs, much like any other emerging use of technologies or business models, there are both competing and complementary standards. Even terms such as Network Function Virtualization (NFV), while useful, do not have much depth within standard taxonomies or definitions.

During the PTC 2015 session entitled “Advanced Capabilities in the Control Plane Leveraging SDN and NFV Toward Intelligent Networks,” a long list of current standards and products supporting the “concept” of SDNs was presented, including:

  • OpenContrail
  • OpenDaylight
  • OpenStack
  • OpenFlow
  • OPNFV
  • ONOS
  • OvS
  • Project Floodlight
  • Open Networking
  • and on and on….

For consumers and small network operators this is a very good development, and it will certainly usher in a new era of on-demand self-service capacity provisioning, elastic provisioning (short-term service contracts, even down to the minute or hour), carrier hotel-based bandwidth and service marketplaces, and variable usage metering and costs, allowing a much better use of OPEX budgets.

For service providers (according to discussions with several North Asian telecom carriers), it is not quite as attractive, as they generally prefer long-term, fixed contracts or wholesale capacity sales.

The connection and integration of cloud services with telecom or network services is quite clear. At some point provisioning of both telecom and compute/storage/application services will be through a single interface: on-demand, elastic (use only what you need, for only as long as you need it), usage-based (metered), and favoring the end user.

While most operators get the message and are in the process of developing and deploying their first-iteration solutions, others simply still have a bit of homework to do. In the words of one CEO from a very large international data center company, “we really need to have a strategy to deal with this multi-cloud, hybrid cloud, or whatever you call it thing.”

Oh my…

Focusing on Cloud Portability and Interoperability

Cloud computing has helped us understand both the opportunity, and the need, to decouple physical IT infrastructure from the requirements of business. In theory cloud computing not only greatly enhances an organization’s ability to decommission inefficient data center resources, but, even more importantly, eases the process an organization must develop when moving to integration and service-orientation within supporting IT systems.

Current cloud computing standards, such as those published by the US National Institute of Standards and Technology (NIST), have provided very good definitions and a solid reference architecture for understanding, at a high level, a vision of cloud computing.

However these definitions, while good for addressing the vision of cloud computing, are not at the level of detail needed to really understand the potential impact of cloud computing within an existing organization, nor the potential of enabling data and systems resources to meet the need for interoperability of data in a 2020 or 2025 IT world.

The key to interoperability, and subsequent portability, is a clear set of standards.  The Internet emerged as a collaboration of academic, government, and private industry development which bypassed much of the normal technology vendor desire to create a proprietary product or service.  The cloud computing world, while having deep roots in mainframe computing, time-sharing, grid computing, and other web hosting services, was really thrust upon the IT community with little fanfare in the mid-2000s.

While NIST, the Open Grid Forum, OASIS, DMTF, and other organizations have developed some levels of standardization for virtualization and portability, the reality is that applications, platforms, and infrastructure are still largely tightly coupled, restricting the ease most developers would need to accelerate higher levels of integration and interconnection of data and applications.

NIST’s Cloud Computing Standards Roadmap (SP 500-291 v2) states:

…the migration to cloud computing should enable various multiple cloud platforms seamless access between and among various cloud services, to optimize the cloud consumer expectations and experience.

Cloud interoperability allows seamless exchange and use of data and services among various cloud infrastructure offerings and to the data and services exchanged to enable them to operate effectively together.”

Very easy to say; however the reality is, particularly with PaaS and SaaS libraries and services, that few fully interchangeable components exist, and any information sharing is a compromise in flexibility.

The Open Group, in their document “Cloud Computing Portability and Interoperability” simplifies the problem into a single statement:

“The cheaper and easier it is to integrate applications and systems, the closer you are getting to real interoperability.”

The alternative is of course an IT world that is restrained by proprietary interfaces, extending the pitfalls and dangers of vendor lock-in.

What Can We Do?

The first thing is that the cloud consumer world must take a stand and demand that vendors produce services and applications based on interoperability and data portability standards. No IT organization in the current IT maturity continuum should be procuring systems that do not support an open, industry-standard, service-oriented infrastructure, platform, and applications reference model (Open Group).

In addition to the need for interoperable data and services, the concept of portability is essential to developing, operating, and maintaining effective disaster management and continuity of operations procedures.  No IT infrastructure, platform, or application should be considered which does not allow and embrace portability.  This includes NIST’s guidance stating:

“Cloud portability allows two or more kinds of cloud infrastructures to seamlessly use data and services from one cloud system and be used for other cloud systems.”

The bottom line for all CIOs, CTOs, and IT managers – accept the need for service-orientation within all existing or planned IT services and systems. Embrace Service-Oriented Architectures and Enterprise Architecture, and at all costs avoid the potential for vendor lock-in when considering any level of infrastructure or service.

Standards are the key to portability and interoperability, and IT organizations have the power to continue forcing adoption and compliance with standards by all vendors.  Do not accept anything which does not fully support the need for data interoperability.

It is Time to Consider Wireless Mesh Networking in Our Disaster Recovery Plans

Wireless Mesh Networking (WMN) has been around for quite a few years. However, not until recently, when protesters in Cairo and Hong Kong used utilities such as FireChat to bypass the mobile phone systems and communicate directly with each other, did mesh networking become well known.

A WMN establishes an ad hoc communications network using the WiFi (802.11/15/16) radios on participants’ mobile phones and laptops to connect with each other, extending the connectable portion of the network to any device with WMN software. Some devices may act as clients, some as mesh routers, and some as gateways. Of course there are more technical issues to fully understand with mesh networks, however the bottom line is that if you have an Android, iOS, or software-enabled laptop you can join, extend, and participate in a WMN.
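To make the relay idea concrete, here is a toy flooding sketch – not a real mesh routing protocol such as 802.11s – showing only how a message can hop device-to-device across radio neighbors until it reaches a gateway:

```python
# Toy flooding sketch: each device relays a message once to its radio
# neighbors, which is enough to show how traffic crosses a mesh with no
# fixed infrastructure at all. Device names are purely illustrative.

from collections import deque

def flood(neighbors, source):
    """neighbors: dict mapping device -> set of devices within radio range."""
    reached = {source}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for peer in neighbors[node]:
            if peer not in reached:         # each device relays only once
                reached.add(peer)
                queue.append(peer)
    return reached

mesh = {
    "phone_a": {"phone_b"},
    "phone_b": {"phone_a", "laptop_c"},
    "laptop_c": {"phone_b", "gateway_d"},
    "gateway_d": {"laptop_c"},              # e.g., a node with an Internet uplink
}
print(flood(mesh, "phone_a"))
# phone_a reaches gateway_d even though the two are well outside each other's radio range
```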

In locations highly vulnerable to natural disasters, such as hurricanes, tornadoes, earthquakes, or wildfires, access to communications can most certainly mean the difference between surviving and not surviving. However, during disasters, communications networks are likely to fail.

The same concept used to allow protesters in Cairo and Hong Kong to communicate outside of the mobile and fixed telephone networks could, and possibly should, have a role to play in responding to disasters.

An interesting use of this type of network was highlighted in a recent novel by Matthew Mather, entitled “CyberStorm.” Following a “cyber” attack on the US Internet and connected infrastructures, much of the fixed communications infrastructure was rendered inoperable, and utilities depending on those networks also failed. An ad hoc WMN was built by some enterprising technicians, using the wireless radios available within most smart phones. This supported primarily messaging, but it did allow citizens to communicate with each other – and the police – by interconnecting their smart phones into the mesh.

We have already embraced mobile phones, with SMS instant messaging, into many of our country’s emergency notification systems.  In California we can receive instant notifications from emergency services via SMS and Twitter, in addition to reverse 911.  This actually works very well, up to the point of a disaster.

WMN may provide a model for ensuring communications following a disaster.  As nearly every American now has a mobile phone, with a WiFi radio, the basic requirements for a mesh network are already in our hands.  The main barrier, today, with WMN is the distance limitations between participating access devices.  With luck WiFi antennas will continue to increase in power, reducing distance barriers, as each new generation is developed.

There are quite a few WMN clients available for smart phones, tablets, and WiFi-enabled devices today. While many of these are used as instant messaging and social platforms, just as with other social communications applications such as Twitter, the underlying technology can be put to many different uses, including of course disaster communications.

Again, the main limitations on using WMNs in disaster planning today are the limited number of participating nodes (devices with a WiFi radio), distance limitations with existing wireless radios and protocols, and the fact very few people are even aware of the concept of WMNs and potential deployments or uses. The more participants in a WMN, the more robust it becomes, the better performance the WMN will support, and the better the chance your voice will be heard during a disaster.

Here are a couple WMN Disaster Support ideas I’d like to either develop, or see others develop:

  • Much like the existing 911 network, a WMN standard could and should be developed for all mobile phone devices, tablets, and laptops with a wireless radio
  • Each mobile device should include an “App” for disaster communications
  • Cities should attempt to install WMN compatible routers and access points, particularly in areas at high risk for natural disasters, which could be expected to survive the disaster
  • Citizens in disaster-prone areas should be encouraged to add a solar charging device to their earthquake, wildfire, and  other disaster-readiness kits to allow battery charging following an anticipated utility power loss
  • Survivable mesh-to-Internet gateways should be the responsibility of city government, while allowing citizen or volunteer gateways (including ham radio) to facilitate communications out of the disaster area
  • Emergency applications should include the ability to easily submit disaster status reports, including photos and video, to local, state, or FEMA Incident Management Centers (a minimal payload sketch follows this list)
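As a rough illustration of that last idea, a status report relayed over a bandwidth-starved, intermittent mesh should be tiny and self-describing; the field names below are purely hypothetical, not any agency’s format:

```python
# Hypothetical status-report payload for a disaster app relayed over a mesh.
# Kept deliberately small so it can survive low-bandwidth hops toward a
# surviving Internet gateway; photos/video would follow separately when
# capacity allows.

import json, time

def status_report(sender_id, lat, lon, condition, note=""):
    report = {
        "id": sender_id,
        "ts": int(time.time()),            # epoch seconds, avoids timezone ambiguity
        "loc": [round(lat, 4), round(lon, 4)],
        "cond": condition,                 # e.g. "ok", "injured", "trapped"
        "note": note[:140],                # cap free text to keep the packet small
    }
    return json.dumps(report, separators=(",", ":")).encode("utf-8")

packet = status_report("user-4421", 34.0522, -118.2437, "ok", "Power out, family safe")
print(len(packet), "bytes")                # small enough to relay hop-by-hop
```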

That is a start.

Take a look at Wireless Mesh Networks. Wikipedia has a great high-level explanation, and a Google search yields hundreds of entries. WMNs are nothing new, but as with the early days of the Internet, they are not getting a lot of attention. However, maybe at some time in the future a WMN could save your life.

Adopting Critical Thinking in Information Technology

The scenario is a data center, late on a Saturday evening.  A telecom distribution system fails, and operations staff are called in from their weekend to quickly find the problem and restore operations as quickly as possible.

As time goes on, many customers begin to call in and open trouble tickets, upset at system outages and escalating customer disruptions.

The team spends hours trying to fix a rectifier providing DC power to a main telecommunications distribution switch, starting by replacing each system component one by one, hoping to find the guilty part. The team grows very frustrated due not only to fatigue, but also to their inability to solve the problem. After many hours the team finally realizes there is no issue with either the telecom switch or the rectifier supplying DC power to the switch. What could the problem be?

Finally, after many hours of troubleshooting, chasing symptoms, and hit-or-miss component replacements, an electrician discovers a panel circuit that has failed due to many years of misuse (for the electrical engineers, it was actually a circuit that oxidized and shorted due to “over-amping” the circuit – without preventive maintenance or routine checks).

The incident highlighted a reality – the organization working on the problem had very little critical thinking or problem solving skills.  They chased each obvious symptom, but never really addressed or successfully identified the underlying problem.  Great technicians, poor critical thinkers.   And a true story.

While this incident was a data center-related troubleshooting failure, we frequently fail to use good critical thinking not only in troubleshooting, but also in developing opportunities and solutions for our business users and customers.

A few years ago I took a break from the job and spent some time working on personal development. In addition to collecting certifications in TOGAF, ITIL, and other architecture-related subjects, I added a couple of additional classes, including the Kepner-Tregoe (K-T) and Kepner-Fourie (K-F) critical thinking and problem solving courses.

Not bad schools of thought, and a good refresher course reminding me of those long since forgotten systems management skills learned in graduate school – heck, nearly 30 years ago.

Here is the problem: IT systems and business use of technologies have rapidly developed during the past 10 years, and that rate of change appears to be accelerating.  Processes and standards developed 10, 15, or 20 years ago are woefully inadequate to support much of our technology and business-related design, development, and operations.  Tacit knowledge, tacit skills, and gut feelings cannot be relied on to correctly identify and solve problems we encounter in our fast-paced IT world.

Keep in mind, this discussion is not only related to problem solving, but also works just as well when considering new product or solution development for new and emerging business opportunities or challenges.

Critical Thinking forces us to know what a problem (or opportunity) is, know and apply the differences between inductive and deductive reasoning, identify premises and conclusions, good and bad arguments, and acknowledge issue descriptions and explanations (Erlandson).

Critical thinking “religions” such as Kepner-Fourie (K-F) provide a process and model for solving problems. Not bad if you have the time to create and follow heavy processes, or better yet can automate much of the process. However, even studying extensive systems like K-T and K-F will continue to drive the need for establishing an appropriate system for responding to events.

Regardless of the approach you may consider, repeated exposure to critical thinking concepts and practice will force us to  intellectually step away from chasing symptoms or over-reliance on tacit knowledge (automatic thinking) when responding to problems and challenges.

For IT managers, think of it as an intellectual ITIL continuous improvement cycle – we always need to exercise our brains and thought processes. The status quo, or relying on time-honored solutions to problems, will probably not be sufficient to bring our IT organizations into the future. We need to continue ensuring our assumptions are based on facts, avoid undue influence – in particular by vendors – ensure our stakeholders have confidence in our problem or solution development process, and maintain a good awareness of business and technology transformations impacting our actions.

In addition to those courses and critical thinking approaches listed above, exposure and study of those or any of the following can only help ensure we continue to exercise and hone our critical thinking skills.

  • A3 Management
  • Toyota Kata
  • PDSA (Plan-Do-Study-Act)

And lots of other university or related courseware. For myself, I keep my interest alive by reading an occasional eBook (such as “How to Think Clearly: A Guide to Critical Thinking” by Doug Erlandson – great to read during long flights) and watching YouTube videos.

What do you “think?”

Nurturing the Marriage of Cloud Computing and SOAs

In 2009 we began consulting jobs with governments in developing countries with the primary objective of consolidating data centers across government ministries and agencies into centralized, high capacity and quality data centers. At the time, nearly all individual ministry or agency data infrastructure was built into either small computer rooms or server closets with some added “brute force” air conditioning, no backup generators, no data backup, superficial security, and lots of other ailments.

The vision and strategy was that if we consolidated inefficient, end-of-life, and high risk IT infrastructure into a standardized and professionally managed facility, national information infrastructure would not only be more secure, but through standardization, volume purchasing agreements, some server virtualization, and development of broadband infrastructure, most of the IT needs of government would be easily fulfilled.

Then of course cloud computing began to mature, and the underlying technologies of Infrastructure as a Service (IaaS) became feasible. Now, not only were the governments able to decommission inefficient and high-risk IS environments, they would also be able to build virtual data centers with on-demand compute, storage, and network resources. Basic data center replacement.

Even the remaining committed “server hugger” IT managers and fiercely independent governmental organizations could hardly argue against the benefits of having access to disaster recovery storage capacity through the centralized data center.

As the years passed, and we entered 2014, not only did cloud computing mature as a business model, but senior management began to increase their awareness of various aspects of cloud computing, including the financial benefits, standardization of IT resources, the characteristics of cloud computing, and potential for Platform and Software as a Service (PaaS/SaaS) to improve both business agility and internal decision support systems.

At the same time, information and organizational architecture, governance, and service delivery frameworks such as TOGAF, COBIT, ITIL, and Risk Analysis training reinforced the value of both data and information within an organization, and the need for IT systems to support higher level architectures supporting decision support systems and market interactions (including government-to-government, business, and citizen interactions for the public sector).

2015 will bring cloud computing and architecture together at levels just becoming comprehensible to much of the business and IT world. The Open Group has made a good first stab at building a standard for this marriage with their Service-Oriented Cloud Computing Infrastructure (SOCCI). According to the SOCCI standard,

“Infrastructure is a foundational element for enterprise architecture. Infrastructure has been traditionally provisioned in a physical manner. With the evolution of virtualization technologies and application of service-orientation to infrastructure, it can now be offered as a service.

Service-orientation principles originated in the business and application architecture arena. After repeated, successful application of these principles to application architecture, IT has evolved to extending these principles to the infrastructure.”

At first glance the SOCCI standard appears to be a document which creates a mapping between enterprise architecture (TOGAF) and cloud computing. At second glance the SOCCI standard really steps toward tightening the loose coupling of standard service-oriented architectures through use of cloud computing tools included with all service models (IaaS/PaaS/SaaS).

The result is an architectural vision which is easily capable of absorbing existing IT requirements, as well as incorporating emerging big data analytics models, interoperability, and enterprise architecture.

Since the early days of 2009 discussion topics with government and enterprise customers have shown a marked transition from simply justifying decommissioning of high risk data centers to how to manage data sharing, interoperability, or the potential for over standardization and other service delivery barriers which might inhibit innovation – or ability of business units to quickly respond to rapidly changing market opportunities.

2015 will be an exciting year for information and communications technologies.  For those of us in the consulting and training business, the new year is already shaping up to be the busiest we have seen.

Now that We Have Adopted IaaS…

Providing guidance or consulting to organizations on cloud computing topics can be really easy, or really tough.  In the past most of the initial engagement was dedicated to training and building awareness with your customer.  The next step was finding a high value, low risk application or service that could be moved to Infrastructure as a Service (IaaS) to solve an immediate problem, normally associated with disaster recovery or data backups.

As the years have continued, the dynamics have changed. On one hand, IT professionals and CIOs began to establish better knowledge of what virtualization, cloud computing, and outsourcing could do for their organization. CFOs became aware of the financial potential of virtualization and cloud computing, and a healthy dialog developed between IT, operations, business units, and the CFO.

The “Internet Age” has also driven global competition down to the local level, forcing nearly all organizations to respond more rapidly to business opportunities.  If a business unit cannot rapidly respond to the opportunity, which may require product and service development, the opportunity can be lost far more quickly than in the past.

In the old days, procurement of IT resources could require a fairly lengthy cycle. In the Internet Age, if an IT procurement cycle takes more than six months, there is probably little chance of effectively meeting the greatly shortened development cycles that competitors on other continents – or across the street – may be able to fulfill.

With IaaS the procurement cycle for IT resources can be measured in minutes, allowing business units to spend far more time developing products, services, and solutions, rather than dealing with the frustration of being powerless to respond to short-window opportunities. This of course addresses the essential cloud characteristics of rapid elasticity and on-demand self-service.
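As a rough sketch of what that minutes-long procurement cycle looks like in practice – the endpoint, payload fields, and token below are hypothetical, not any specific provider’s API – a business unit script might request capacity like this:

```python
# Hypothetical IaaS provisioning call illustrating on-demand self-service and
# rapid elasticity: the "procurement cycle" collapses to a single API request.
# The URL and fields are illustrative assumptions, not a real provider's API.

import json
from urllib import request

def provision_server(api_url, token, cpu=2, ram_gb=4, hours=1):
    payload = {
        "cpu": cpu,
        "ram_gb": ram_gb,
        "lease_hours": hours,              # elastic: released automatically after use
    }
    req = request.Request(
        f"{api_url}/v1/servers",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:     # minutes, not a six-month cycle
        return json.load(resp)

# Example (commented out because the endpoint is fictional):
# server = provision_server("https://iaas.example.net", "API_TOKEN", cpu=4, ram_gb=8)
```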

In addition to on-demand and elastic resources, IaaS has offered nearly all organizations the option of moving IT resources into either public or private cloud infrastructure.  This has the benefit of allowing data center decommissioning, and re-commissioning into a virtual environment.  The cost of operating data centers, maintaining data centers and IT equipment, and staffing data centers vs. outsourcing that infrastructure into a cloud is very interesting to CFOs, and a major justification for replacing physical data centers with virtual data centers.

The second dynamic, in addition to greater professional knowledge and awareness of cloud computing, is the fact we are starting to recruit cloud-aware employees graduating from universities and making their first steps into careers and workforce.  With these “cloud savvy” young people comes deep experience with interoperable data, social media, big data, data analytics, and an intellectual separation between access devices and underlying IT infrastructure.

The Next Step in Cloud Evolution

OK, so we all are generally aware of the components of IaaS, Platform as a Service (PaaS), and Software as a Service (SaaS).  Let’s have a quick review of some standout features supported or enabled by cloud:

  • Increased standardization of applications
  • Increased standardization of databases
  • Federation of security systems (Authentication and Authorization)
  • Service busses
  • Development of other common applications (GIS, collaboration, etc.)
  • Transparency of underlying hardware

Now let’s consider the need for better, real-time, accurate decision support systems (DSS).  Within any organization the value of a DSS is dependent on data integrity, data access (open data within/without an organization), and single-source data.

Frameworks for developing an effective DSS are certainly available, whether TOGAF, the US Federal Enterprise Architecture Framework (FEAF), interoperability frameworks, or service-oriented architectures (SOA). All are fully compatible with the tools made available within the basic cloud service delivery models (IaaS, PaaS, SaaS).

The Open Group (the same organization which developed TOGAF) has responded with their model of a Service-Oriented Cloud Computing Infrastructure (SOCCI) framework. The SOCCI is identified as the marriage of a service-oriented infrastructure and cloud computing. The SOCCI also incorporates aspects of TOGAF into the framework, which may drive more credibility into a SOCCI architectural development process.

The expected result of this effort, for existing organizations dealing with departmental “silos” of IT infrastructure, data, and applications, is a level of interoperability and DSS development based on service-orientation, using a well-designed underlying cloud infrastructure. This data sharing can be extended beyond the (virtual) firewall to others in an organization’s trading or governmental community, resulting in a DSS which will come closer and closer to an architecture vision based on the true value of data produced by, or made available to, an organization.

While we most certainly need IaaS, and the value of moving to virtual data centers is justified by itself, we will not truly benefit from the potential of cloud computing until we understand the potential of data produced and available to decision makers.

The opportunity will need a broad spectrum of contributors and participants with awareness and training in disciplines ranging from technical capabilities, to enterprise architecture, to service delivery, and governance acceptable to a cloud-enabled IT world.

For those who are eagerly consuming training and knowledge in the above skills and knowledge, the future is anything but cloudy.  For those who believe in status quo, let’s hope you are close to pension and retirement, as this is your future.

ICT Modernization Planning

The current technology refresh cycle presents many opportunities and challenges to both organizations and governments. The potential of service-oriented architectures, interoperability, collaboration, and continuity of operations is an attractive outcome of the technologies and business models available today. The challenges are more related to business processes and human factors, both of which require organizational transformations to take best advantage of the collaborative environments enabled through use of cloud computing and access to broadband communications.

Gaining the most benefit from planning an interoperable environment for governments and organizations may be facilitated through use of business tools such as cloud computing.  Cloud computing and underlying technologies may create an operational environment supporting many strategic objectives being considered within government and private sector organizations.

Reaching target architectures and capabilities is not a single action, and will require a clear understanding of current “as-is” baseline capabilities, target requirements, the gaps or capabilities needed to reach the target, and a clear transition plan to bring the organization from the “as-is” baseline to the target goal.

To most effectively reach that goal requires an understanding of the various contributing components within the transformational ecosystem.  In addition, planners must keep in mind the goal is not implementation of technologies, but rather consideration of technologies as needed to facilitate business and operations process visions and goals.

Interoperability and Enterprise Architecture

Information technology, particularly communications-enabled technology, has enhanced business processes, education, and the quality of life for millions around the world. However, traditionally ICT has created silos of information which are rarely integrated or interoperable with other data systems or sources.

As the science of enterprise architecture development and modeling, service-oriented architectures, and interoperability frameworks continue to force the issue of data integration and reuse, ICT developers are looking to reinforce open standards allowing publication of external interfaces and application programming interfaces.

Cloud computing, a rapidly maturing framework for virtualization, standardized data, application, and interface structure technologies, offers a wealth of tools to support development of both integrated and interoperable ICT  resources within organizations, as well as among their trading, shared, or collaborative workflow community.

The Institute for Enterprise Architecture Development defines enterprise architecture (EA) as a “complete expression of the enterprise; a master plan which acts as a collaboration force between aspects of business planning such as goals, visions, strategies and governance principles; aspects of business operations such as business terms, organization structures, processes and data; aspects of automation such as information systems and databases; and the enabling technological infrastructure of the business such as computers, operating systems and networks”

ICT, including utilities such as cloud computing, should focus on supporting the holistic objectives of organizations implementing an EA. Non-interoperable or siloed data will generally have less value than reusable, shared data, which will also greatly increase systems reliability and data integrity.

Business Continuity and Disaster Recovery (BCDR)

Recent surveys of governments around the world indicate in most cases limited or no disaster management or continuity of operations planning. The risk of losing critical national data resources due to natural or man-made disasters is high, and the ability of most governments to maintain government and citizen services during a disaster is limited, based on the amount of time required to restart government services (recovery time objective/RTO), as well as the point of data restoration (recovery point objective/RPO).

In existing ICT environments, particularly those with organizational and data resource silos, RTOs and RPOs can extend to near-indefinite if neither a data backup plan nor systems and service restoration capacity is present. This is particularly acute if the processing environment includes legacy mainframe computer applications which do not have a mirrored recovery capacity available upon failure or loss of service due to disaster.

Cloud computing can provide a standards-based environment that fully supports near zero RTO/RPO requirements.  With the current limitation of cloud computing being based on Intel-compatible architectures, nearly any existing application or data source can be migrated into a virtual resource pool.   Once within the cloud computing Infrastructure as a Service (IaaS) environment, setting up distributed processing or backup capacity is relatively uncomplicated, assuming the environment has adequate broadband access to the end user and between processing facilities.
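A back-of-envelope way to reason about those targets – all figures below are illustrative assumptions, not benchmarks – is simply to compare a plan’s worst-case data loss and restart time against the stated RPO/RTO:

```python
# Minimal BCDR check: does a given backup/restart plan meet the stated RPO/RTO
# targets? Worst-case data loss equals the gap between recovery points, and
# worst-case downtime equals the time needed to stand services back up.

def meets_targets(backup_interval_h, restore_time_h, rpo_target_h, rto_target_h):
    worst_case_rpo = backup_interval_h     # data written since the last backup is lost
    worst_case_rto = restore_time_h        # time to restore service at the recovery site
    return worst_case_rpo <= rpo_target_h and worst_case_rto <= rto_target_h

# Nightly tape with a 48-hour rebuild, against a 1h RPO / 4h RTO requirement:
print(meets_targets(24, 48, rpo_target_h=1, rto_target_h=4))    # False
# 15-minute replication to a warm cloud standby:
print(meets_targets(0.25, 1, rpo_target_h=1, rto_target_h=4))   # True
```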

Cloud computing-enabled BCDR also opens opportunities for developing public-private partnerships (PPPs), or considering the potential of outsourcing into public or commercially operated cloud compute, storage, and communications infrastructure. Again, the main limitation is the requirement for portability between systems.

Transformation Readiness

ICT modernization will drive change within all organizations.  Transformational readiness is not a matter of technology, but a combination of factors including rapidly changing business models, the need for many-to-many real-time communications, flattening of organizational structures, and the continued entry of technology and communications savvy employees into the workforce.

The potential of outsourcing utility compute, storage, application, and communications will eliminate the need for much physical infrastructure, such as redundant or obsolete data centers and server closets.  Roles will change based on the expected shift from physical data centers and ICT support hardware to virtual models based on subscriptions and catalogs of reusable application and process artifacts.

A business model for accomplishing ICT modernization includes cloud computing, which relies on technologies such as server and storage resource virtualization, and adds operational characteristics such as on-demand resource provisioning to reduce the time needed to procure the ICT resources required to respond to emerging operational or other business opportunities.

IT management and service operations move from a workstation environment to a user interface driven by SaaS.  The skills needed to drive ICT within the organization will need to change, becoming closer to the business, while reducing the need to manage complex individual workstations.

IT organizations will need to change, as organizations may elect to outsource most or all of their underlying physical data center resources to a cloud service provider, either in a public or private environment.  This could eliminate the need for some positions, while driving new staffing requirements in skills related to cloud resource provisioning, management, and development.

Business unit managers may be able to take advantage of other aspects of cloud computing, including access to on-demand compute, storage, and applications development resources.  This may increase their ability to quickly respond to rapidly changing market conditions and other emerging opportunities.   Business unit managers, product developers, and sales teams will need to become familiar with their new ICT support tools.  All positions from project managers to sales support will need to quickly acquire skills necessary to take advantage of these new tools.

The Role of Cloud Computing

Cloud computing is a business representation of a large number of underlying technologies. Encompassing virtualization, development environments, and hosted applications, cloud computing provides a framework for developing standardized service models, deployment models, and service delivery characteristics.

The US National Institute of Standards and Technology (NIST) provides a definition of cloud computing accepted throughout the ICT industry.

“Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction.“

While organizations face challenges related to developing enterprise architectures and interoperability, cloud computing continues to rapidly develop as an environment with a rich set of compute, communication, development, standardization, and collaboration tools needed to meet organizational objectives.

Data security, including privacy, is different within a cloud computing environment, as the potential for data sharing is expanded among both internal and potentially external agencies.  Security concerns are expanded when questions of infrastructure multi-tenancy, network access to hosted applications (Software as a Service / SaaS), and governance of authentication and authorization raise questions on end user trust of the cloud provider.

A move to cloud computing is often associated with data center consolidation initiatives within both governments and large organizations.  Cloud delivery models, including Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) support the development of virtual data centers.

While it is clear long term target architectures for most organizations will be an environment with a single data system, in the short term it may be more important to decommission high risk server closets and unmanaged servers into a centralized, well-managed data center environment offering on-demand access to compute, storage, and network resources – as well as BCDR options.

Even at the most basic level of considering IaaS and PaaS as a replacement environment to physical infrastructure, the benefits to the organization may become quickly apparent.  If the organization establishes a “cloud first” policy to force consolidation of inefficient or high risk ICT resources, and that environment further aligns the organization through the use of standardized IT components, the ultimate goal of reaching interoperability or some level of data integration will become much easier, and in fact a natural evolution.

Nearly all major ICT-related hardware and software companies are re-engineering their product development to either drive cloud computing or be cloud-aware. Microsoft has released their Office 365 suite of online and hosted environments, as has Google with both PaaS and SaaS tools such as Google App Engine and Google Docs.

The benefits for organizations considering a move to hosted environments, such as MS 365, are based on access to a rich set of applications and resources available on-demand, using a subscription model rather than a licensing model, and offering a high level of standardization to developers and applications.

Users comfortable with standard office automation and productivity tools will find the same features in a SaaS environment, while still being relieved of individual software license costs, application maintenance, or potential loss of resources due to equipment failure or theft.  Hosted applications also allow a persistent state, collaborative real-time environment for multi-users requiring access to documents or projects.  Document management and single source data available for reuse by applications and other users, reporting, and performance management becomes routine, reducing the potential and threat of data corruption.

The shortfall, particularly for governments, is that using a large commercial cloud infrastructure and service provider such as Microsoft may require physically storing data in locations outside of their home country, as well as forcing data into a multi-tenant environment which may not meet security requirements for organizations.

Cloud computing offers an additional major feature at the SaaS level that will benefit nearly all organizations transitioning to a mobile workforce. SaaS by definition is platform independent. Users access SaaS applications and underlying data via any device offering a network connection and access to an Internet-connected address through a browser. The actual intelligence in an application is at the server or virtual server, and the user device is simply a dumb terminal displaying a portal, access point, or the results of a query or application executed through a command at the user screen.

Cloud computing continues to develop as a framework and toolset for meeting business objectives. Cloud computing is well suited to respond to rapidly changing business and organizational needs, as the characteristics of on-demand access to infrastructure resources, rapid elasticity (the ability to provision and de-provision resources as needed to meet processing and storage demand), and an organization’s ability to measure cloud computing resource use for internal and external accounting mark a major change in how an organization budgets ICT.

As cloud computing matures, each organization entering a technology refresh cycle must ask the question “are we in the technology business, or should we concentrate our efforts and budget on activities directly supporting our business objectives?” If the answer is the latter, then the organization should evaluate outsourcing its ICT infrastructure to an internal or commercial cloud service provider.

It should be noted that today most cloud computing IaaS service platforms will not support migration of mainframe applications, such as those written for a RISC processor. Those applications require redevelopment to operate within an Intel-compatible processing environment.

Broadband Factor

Cloud computing components are currently implemented over an Internet Protocol network. Users accessing SaaS applications will need network access to connect with applications and data. Depending on the amount of graphics information transmitted from the host to an individual user access terminal, poor bandwidth or lack of broadband could result in an unsatisfactory experience.

In addition, BCDR requires the transfer of potentially large amounts of data between primary and backup locations. Depending on the data parsing plan, whether mirroring data, partial backups, full backups, or live load balancing, data transfer between sites could be restricted if sufficient bandwidth is not available between sites.
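A quick, assumption-laden sizing example makes the point – the volume of changed data and the length of the replication window together dictate the inter-site link speed required:

```python
# Rough sizing of the inter-site link needed to replicate a daily change set
# within a replication window; the inputs are illustrative assumptions only.

def required_mbps(change_set_gb, window_hours):
    bits = change_set_gb * 8 * 1e9                 # GB -> bits (decimal units)
    return bits / (window_hours * 3600) / 1e6      # -> megabits per second

print(round(required_mbps(500, 8)))   # 500 GB of changes in an 8h window ~ 139 Mbps
print(round(required_mbps(500, 1)))   # the same data in a 1h window ~ 1111 Mbps
```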

Cloud computing is dependent on broadband as a means of connecting users to resources, and data transfer between sites.  Any organization considering implementing cloud computing outside of an organization local area network will need to fully understand what shortfalls or limitations may result in the cloud implementation not meeting objectives.

The Service-Oriented Cloud Computing Infrastructure (SOCCI)

Governments and other organizations are entering a technology refresh cycle based on existing ICT hardware and software infrastructure hitting the end of life.  In addition, as the world aggressively continues to break down national and technical borders, the need for organizations to reconsider the creation, use, and management of data supporting both mission critical business processes, as well as decision support systems will drive change.

Given the clear direction industry is taking to embrace cloud computing services, as well as the awareness that existing siloed data structures within many organizations would better serve the organization in a service-oriented framework, it makes sense to consider an integrated approach.

A SOCCI considers both, adding reference models and frameworks, including enterprise architecture models such as TOGAF, to ultimately provide a broad, mature framework to support business managers and IT managers in their technology and business refresh planning process.

SOCCIs promote the use of architectural building blocks, publication of external interfaces for each application or data source developed, single-source data, reuse of data and standardized application building blocks, as well as development and use of enterprise service buses to promote further integration and interoperability of data.
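A minimal publish/subscribe sketch – an in-memory toy, not a production ESB product – illustrates why a service bus reduces point-to-point integration: a producer publishes a standardized message once, and any subscribing system can reuse the data.

```python
# Toy enterprise-service-bus sketch: topics decouple data producers from the
# applications that reuse the data. A real ESB adds durable queues, message
# schemas, transformation, and security on top of this basic pattern.

from collections import defaultdict

class ServiceBus:
    def __init__(self):
        self.subscribers = defaultdict(list)       # topic -> list of handlers

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

bus = ServiceBus()
bus.subscribe("citizen.address.changed", lambda m: print("GIS layer update:", m))
bus.subscribe("citizen.address.changed", lambda m: print("Tax system update:", m))
bus.publish("citizen.address.changed", {"citizen_id": "A-102", "city": "Honolulu"})
```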

A SOCCI looks at elements of cloud computing, such as virtualized and on-demand compute/storage resources and access to broadband communications – including security, encryption, switching, routing, and access – as a utility. The utility is always available to the organization for use and exploitation. Higher level cloud components, including PaaS and SaaS, add value and higher level entry points for developing the ICT tools needed to meet the overall enterprise architecture and service-orientation required by the organization.

According to the Open Group, a SOCCI framework provides the foundation for connecting a service-oriented infrastructure with the utility of cloud computing. As enterprise architecture and interoperability frameworks continue to gain in value and importance to organizations, this framework will provide additional leverage to make best use of available ICT tools.

The Bottom Line on ICT Modernization

The Internet has reached nearly every point in the world, providing a global community functioning within an always-available, real-time communications infrastructure. University and primary school graduates are entering the workforce with social media, SaaS, collaboration, and location-transparent peer communities diffused in their tacit knowledge and experience.

This environment has greatly flattened any leverage formerly developed countries, or large monopoly companies have enjoyed during the past several technology and market cycles.

An organization based on non-interoperable or non-standardized data, with no BCDR protection, will certainly risk losing its competitive edge in a world being created by technology- and data-aware challengers.

Given the urgency organizations face to address data security, continuity of operations, agility to respond to market conditions, and operational costs associated with traditional ICT infrastructure, many are looking to emerging technology frameworks such as cloud computing to provide a model for planning solutions to those challenges.

Cloud computing and enterprise architecture frameworks provide guidance and a set of tools to assist organizations in providing structure, and infrastructure needed to accomplish ICT modernization objectives.

Data Center Consolidation and Adopting Cloud Computing in 2013

Throughout 2012 large organizations and governments around the world continued to struggle with the idea of consolidating inefficient data centers, server closets, and individual “rogue” servers scattered around their enterprise or government agencies.  Issues dealt with the cost of operating data centers, disaster management of information technology resources, and of course human factors centered on control, power, or retention of jobs in a rapidly evolving IT industry.

Cloud computing and virtualization continue to have an impact on all consolidation discussions, not only from the standpoint of providing a much better model for managing physical assets, but also in the potential cloud offers to solve disaster recovery shortfalls, improve standardization, and encourage or enable development of service-oriented architectures.

Our involvement in projects ranging from local, state, and national government levels in both the United States and other countries indicates a consistent need for answering the following concerns:

  • Existing IT infrastructure, including both IT and facility, is reaching the end of its operational life
  • Collaboration requirements between internal and external users are expanding quickly, driving an architectural need for interoperability
  • Decision support systems require access to both raw data, and “big data/archival data”

We would like to see an effort within the IT community to move in the following directions:

  1. Real effort at decommissioning and eliminating inefficient data centers
  2. All data and applications should be fit into an enterprise architecture framework – regardless of the size of organization or data
  3. Aggressive development of standards supporting interoperability, portability, and reuse of objects and data

Regardless of the very public failures experienced by cloud service providers over the past year, the reality is cloud computing as an IT architecture and model is gaining traction, and is not likely to go away any time soon.  As with any emerging service or technology, cloud services will continue to develop and mature, reducing the impact and frequency of failures.

Why would an organization continue to buy individual high-powered workstations, individual software licenses, and device-bound storage when the same application can be delivered to a simple display, or a wide variety of displays, with standardized web-enabled cloud (SaaS) applications that store mission critical data images on a secure storage system at a secure site? Why not facilitate the transition from CAPEX to OPEX, license to subscription, infrastructure to product and service development?

In reality, unless an organization is in the hardware or software development business, there is very little technical justification for building and managing a data center.  This includes secure facilities supporting military or other sensitive sites.

The cost of building and maintaining a data center, compared with either outsourcing into a commercial colocation site – or virtualizing data, applications, and network access requirements has gained the attention of CFOs and CEOs, requiring IT managers to more explicitly justify the cost of building internal infrastructure vs. outsourcing.  This is quickly becoming a very difficult task.
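An entirely hypothetical build-vs-outsource comparison shows the kind of arithmetic CFOs are now asking IT managers to defend; every figure below is a placeholder assumption, not market data:

```python
# Illustrative annualized cost comparison of the sort a CFO requests; all
# numbers are placeholder assumptions used only to show the shape of the math.

def annualized_build_cost(capex, years, annual_opex):
    return capex / years + annual_opex          # straight-line depreciation + run cost

build = annualized_build_cost(capex=6_000_000, years=10, annual_opex=900_000)
outsource = 12 * 110_000                        # assumed monthly colocation/cloud fee

print(f"build:     ${build:,.0f}/yr")
print(f"outsource: ${outsource:,.0f}/yr")
# The point is not the specific numbers, but that the comparison is now explicit
# and must be justified line by line.
```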

Money spent on a data center infrastructure is lost to the organization. The cost of labor is high, and the cost of energy, space, and maintenance is high – money that could be better applied to product and service development, customer service capacity, or other revenue and customer-facing activities.

The Bandwidth Factor

The one major limitation the IT community will need to overcome, as data center consolidation continues and cloud services become the norm, is bandwidth. Applications such as streaming video, unified communications, and data-intensive applications will need more bandwidth. The telecom companies are making progress, having deployed 100Gbps backbone capacity in many markets. However this capacity will need to continue growing quickly to meet the needs of organizations accessing data and applications stored or hosted within a virtual or cloud computing environment.

Consider a national government’s IT requirements. If the government, like most, is based within a metro area, the agencies and departments consolidate their individual data centers and server closets into a central or reduced number of facilities. Government interoperability frameworks begin to make small steps allowing cross-agency data sharing, and individual users need access to a variety of applications and data sources to fulfill their decision support requirements.

For example, consider a GIS (Geospatial/Geographic Information System) with multiple demographic or other overlays. Individual users will need to display data that may be drawn from several data sources, through GIS applications, and render a large amount of complex data on individual display screens. Without broadband access both between the user and the application, and between the application and its data sources, the result will be a very poor user experience.
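A rough calculation, using an assumed payload size, shows why the access link dominates that experience:

```python
# Time to deliver one rendered overlay set to the user at different access
# speeds. The 50 MB payload is an assumption chosen only for illustration.

def transfer_seconds(payload_mb, link_mbps):
    return payload_mb * 8 / link_mbps

for link in (2, 20, 100):                       # Mbps: DSL-class, modest fiber, fast fiber
    print(f"{link:>3} Mbps: {transfer_seconds(50, link):5.1f} s for a 50 MB overlay set")
```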

Another example is using the capabilities of video conferencing, desktop sharing, and interactive persistent-state application sharing.  Without adequate bandwidth this is simply not possible.

Revisiting the “4th Utility” for 2013

The final vision on the 2013 “wishlist” is that we, as an IT industry, continue to acknowledge the need for developing the 4th Utility. This is the idea that broadband communications, processing capacity (including SaaS applications), and storage are the right of all citizens. Much like the first three utilities – roads, water, and electricity – the 4th Utility must be a basic part of all discussions related to national, state, or local infrastructure. As we move forward, Internet-enabled, or something like Internet-enabled, communications will be an essential part of all our lives.

The 4th Utility requires that high capacity fiber optic infrastructure and broadband wireless be delivered to any location within the country which supports a community, or an individual connected to a community. We’ll have to pay a fee to access the utility (same as other utilities), but it is our right and obligation to deliver the utility.

2013 will be a lot of fun for us in the IT industry. Cloud computing is going to impact everybody – one way or the other. Individual data centers will continue to close. Service-oriented architectures, enterprise architecture, process modeling, and design efficiency will drive a lot of innovation. We’ll lose some players, gain some players, and we’ll be in a better position at the end of 2013 than we are today.

Gartner Data Center Conference Looks Into Open Source Clouds and Data Backup

Day two of the Gartner Data Center Conference in Las Vegas continued reinforcing old topics, appearing at times either to enlist attendees in contributing to Gartner research, or simply to provide conference content directed at promoting conference sponsors.

For example, the sessions “To the Point: When Open Meets Cloud” and “Backup/Recovery: Backing Up the Future” included a series of audience surveys. Those surveys were apparently the same as presented, in the same sessions, for several years. Thus the speaker immediately referenced this year’s results vs. results from the same survey questions from the past two years. This would lead a casual attendee to believe nothing radically new is being presented on the above topics, and that the attendees are generally contributing to further trend analysis research that will eventually show up in a commercial Gartner research note.

Gartner analyst and speaker on the topic of “When Open Meets Cloud,” Aneel Lakhani, did make a couple of useful, if not obvious, points in his presentation.

  • We cannot secure complete freedom from vendors, regardless of how much you adopt open source
  • Open source can actually be more expensive than commercial products
  • Interoperability is easy to say, but a heck of a lot more complicated to implement
  • Enterprise users have a very low threshold for “test” environments (sorry DevOps guys)
  • If your organization has the time and staff, test, test, and test a bit more to ensure your open source product will perform as expected or designed

However analyst Dave Russell, speaker on the topic of “Backup/Recovery,” was a bit more cut-and-paste in his approach. Lots of questions to match against last year’s conference, and a strong emphasis on using tape as a continuing, if not growing, medium for disaster recovery.

The problem with this presentation was that the discussion centered on backing up data – very little on business continuity. In fact, in one slide he referenced a recovery point objective (RPO) of one day for backups. What organization operating in a global market, in Internet time, can possibly design for a one-day RPO?

In addition, there was no discussion on the need for compatible hardware in a disaster recovery site that would allow immediate or rapid restart of applications.  Having data on tape is fine.  Having mainframe archival data is fine.  But without a business continuity capability, it is likely any organization will suffer significant damage in their ability to function in their marketplace.  Very few organizations today can absorb an extended global presence outage or marketplace outage.

The conference continues until Thursday, and we will look for more positive approaches to data center and cloud computing.

Gartner Data Center Conference Yields Few Surprises

Gartner’s 2012 Data Center Conference in Las Vegas was notable for not yielding any major surprises. While drawing an impressive number of attendees (the stats are not available, however it is clear they are having a very good conference), most of the sessions appeared to simply reaffirm what everybody really knows already, serving to reinforce the reality that data center consolidation, cloud computing, big data, and the move to an interoperable framework will be part of everybody’s life within a few years.

Gartner analyst Ray Paquet started the morning by drawing a line at the real value of server hardware in cloud computing. Paquet stressed that cloud adopters should avoid integrated hardware solutions based on blade servers, which carry a high margin, and focus their CAPEX on cheaper “skinless” servers. Paquet emphasized that integrated solutions are a “waste of money.”

Cameron Haight, another Gartner analyst, fired a volley at the process and framework world with a comparison of the value DevOps brings versus ITIL. Describing ITIL as a cumbersome burden to organizational agility, he presented DevOps as a culture-changer that allows small groups to quickly respond to challenges. Haight emphasized the frequently stressful relationship between development and operations organizations, where operations demands stability and quality, and development needs freedom to move projects forward, sometimes without the comfort of baking code to the standards preferred by operations – and required by frameworks such as ITIL.

Haight’s most direct slide described DevOps as being “ITIL minus CRAP.” Of course most of his supporting slides for moving to DevOps looked eerily like an ITIL process….

Other sessions attended (by the author) included “Shaping Private Clouds,” a WIPRO product demonstration, and a data center introduction by Raging Wire.  All valuable introductions for those who are considering making a major change in their internal IT deployments, but nothing cutting edge or radical.

The Raging Wire data center discussion did raise some questions on the overall vulnerability of large box data centers.  While it is certainly possible to build a data center up to any standard needed to fulfill a specific need, the large data center clusters in locations such as Northern Virginia are beginning to appear very vulnerable to either natural, human, or equipment failure disruptions.  In addition to fulfilling data center tier classification models as presented by the Uptime Institute, it is clear we are producing critical national infrastructure which if disrupted could cause significant damage to the US economy or even social order.

Eventually, much like the communications infrastructure in the US, data centers will need to come under the observation or review of a national agency such as Homeland Security.  While nobody wants a government officer in the data center, protection of national infrastructure is a consideration we probably will not be able to avoid for long.

Raging Wire also noted that some colocation customers, particularly social media companies, are hitting up to 8kW per cabinet – scary if true, especially in extended deployments. This could result in serious operational problems if cooling systems were disrupted, as the heat generated in those cabinets would quickly become extreme. It would also be interesting if companies like Raging Wire and other colocation providers considered developing a real-time CFD monitor for their data center floors, allowing better monitoring and predictability than simple zone monitoring solutions.

The best presentation of the day came at the end: “Big Data is Coming to Your Data Center.” Gartner’s Sheila Childs brought color and enthusiasm to a topic many consider, well, boring. Childs was able to bring the value, power, and future of big data into a human-consumable format that kept the audience in their seats until the end of the session at 6 p.m.

Childs hit on concepts such as “dark data” within organizations, the value of big data in decision support systems (DSS), and the need for developing and recruiting skilled staff who can actually write or build the systems needed to fully exploit the value of big data.  We cannot argue that point, and can only hope our education system is able to focus on producing graduates with the basic skills needed to fulfill that requirement.
