PTC 2015 Wraps Up with Strong Messages on SDNs and Automation

Software Defined Networking (SDN) and Network Function Virtualization (NFV) themes dominated workshops and side conversations throughout the PTC 2015 venue in Honolulu, Hawai’i this week.

SDNs, or more specifically provisioning automation platforms for service provider interconnections, have crept into nearly all marketing materials and elevator pitches in discussions with submarine cable operators, networks, Internet Exchange Points, and carrier hotels.

While some of the material may have included a bit of “SDN washing,” for the most part each operator and service provider engaging in the discussion understands and is scrambling to address the changing need for communications access, and is very serious in acknowledging a pending industry “paradigm shift” in service delivery models.

Presentations by companies such as Ciena and Riverbed showed a mature service delivery structure based on SDNs, while PacNet and Level 3 Communications (formerly TW Telecom) presented functional on-demand, self-service models of both service provisioning and a value-added marketplace.

Steve Alexander from Ciena explained some of the challenges which the industry must address, such as the development of cross-industry SDN-enabled service delivery and provisioning standards.  In addition, as service providers move into service delivery automation, they must still be able to provide a discriminating or unique selling point by considering:

  • How to differentiate their service offering
  • How to differentiate their operations environment
  • How to ensure industry-acceptable delivery and provisioning time cycles
  • How to deal with legacy deployments

Alexander also emphasized that as an industry we need to get away from physical wiring when possible.  With 100Gbps ports, and the ability to create a software abstraction of individual circuits within the 100Gbps resource pool (as an example), there is a lot of virtual or logical provisioning that can be accomplished without the need for dozens or hundreds of physical cross connections.
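
To make the abstraction concrete, here is a minimal, purely illustrative Python sketch of the idea: a single 100Gbps port treated as a shared resource pool from which logical circuits are provisioned and released in software instead of through physical cross connects.  The class and method names are hypothetical and do not represent any vendor’s actual API.

    # Illustrative sketch only: carving logical circuits out of a single
    # 100Gbps physical port, rather than running new physical cross connects.
    class PortPool:
        def __init__(self, port_id: str, capacity_mbps: int = 100_000):
            self.port_id = port_id
            self.capacity_mbps = capacity_mbps
            self.circuits = {}  # circuit_id -> allocated Mbps

        def available_mbps(self) -> int:
            return self.capacity_mbps - sum(self.circuits.values())

        def provision_circuit(self, circuit_id: str, mbps: int) -> bool:
            """Allocate a logical circuit from the shared physical port."""
            if mbps <= 0 or mbps > self.available_mbps():
                return False
            self.circuits[circuit_id] = mbps
            return True

        def release_circuit(self, circuit_id: str) -> None:
            self.circuits.pop(circuit_id, None)

    pool = PortPool("100G-port-07")
    pool.provision_circuit("cust-A-to-cust-B", 10_000)  # a 10Gbps logical circuit
    pool.provision_circuit("cust-C-to-IXP", 1_000)      # a 1Gbps logical circuit
    print(pool.available_mbps())                        # 89000 Mbps still unallocated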

The result of this effort should be an automated provisioning environment within a single service provider, as well as within a broader community marketplace such as a carrier hotel or large telecom interconnection facility (i.e., The Westin Building, 60 Hudson, One Wilshire).  Some examples of actual and required deployments included:

  • A bandwidth on-demand marketplace
  • Data center interconnections, including for data center operators with multiple interconnected meet-me points spread across a geographic area
  • Interconnection to other services within the marketplace such as cloud service providers (e.g., Amazon Direct Connect, Azure, Softlayer, etc), content delivery networks, SaaS, and disaster recovery capacity and services

Robust discussions on standards also spawned debate.  With SDNs, much like any other emerging technology or business model, there are both competing and complementary standards.  Even terms such as Network Function Virtualization (NFV), while useful, do not have much depth within standard taxonomies or definitions.

During the PTC 2015 session entitled “Advanced Capabilities in the Control Plane Leveraging SDN and NFV Toward Intelligent Networks,” a long list of current standards and products supporting the “concept” of SDNs was presented, including:

  • Open Contrail
  • Open Daylight
  • Open Stack
  • Open Flow
  • OPNFV
  • ONOS
  • OvS
  • Project Floodlight
  • Open Networking
  • and on and on….

For consumers and small network operators this is a very good development, which will certainly usher in a new era of on-demand self-service capacity provisioning, elastic provisioning (short-term service contracts, even down to the minute or hour), carrier hotel-based bandwidth and service marketplaces, and variable usage metering and costs, allowing much better use of OPEX budgets.

For service providers (according to discussions with several North Asian telecom carriers), it is not quite as attractive, as they generally would like to see long term, set (or fixed) contracts or wholesale capacity sales.

The connection and integration of cloud services with telecom or network services is quite clear.  At some point provisioning of both telecom and compute/storage/application services will be through a single interface, on-demand, elastic (use only what you need and for only as long as you need it), usage-based (metered), and favor the end user.

While most operators get the message and are in the process of developing and deploying their first-iteration solutions, others simply still have a bit of homework to do.  In the words of one CEO from a very large international data center company, “we really need to have a strategy to deal with this multi-cloud, hybrid cloud, or whatever you call it thing.”

Oh my…

PTC 2015 Focuses on Submarine Cables and SDNs

In an informal survey of words used during seminars and discussions, two main themes are emerging at the Pacific Telecommunications Council’s 2015 annual conference.  The first, as expected, is development of more submarine cable capacity both within the Pacific, as well as to end points in ANZ, Asia, and North America.  The second is software defined networking (SDN), which as envisioned could quickly begin to re-engineer the gateway and carrier hotel interconnection business.

New cable developments, including Arctic Fiber, Trident, SEA-US, and APX-E, have sparked a lot of interest.  One discussion at Sunday morning’s Submarine Cable Workshop highlighted the need for Asian (and other) regions to find ways to bypass the United States, not just for performance reasons, but also to keep US government agencies from intercepting and potentially exploiting data hitting US networks and data systems.

The bottom line with all submarine cable discussions is the need for more, and more, and more cable capacity.  Applications using international communications capacity, notably video, are consuming bandwidth at rates which are driving fear that cable operators won’t be able to keep up with demand.

However, perhaps the most interesting, and frankly surprising, development is with SDNs in the meet me room (MMR).  Products such as PacNet’s PEN (PacNet Enabled Network) are finally putting reality into on-demand, self-service circuit provisioning, and soon cloud computing capacity provisioning, within the MMR.  Demonstrations showed how a network, or user, can provision anywhere from 1Mbps to 10Gbps point-to-point within a minute.
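
As an illustration of what such on-demand, self-service provisioning might look like to a customer, the following Python sketch submits a point-to-point circuit order to a hypothetical SDN portal API.  The endpoint, fields, and token are invented for this example and are not PacNet’s actual PEN interface.

    # Hypothetical illustration of on-demand, self-service circuit provisioning
    # in the spirit of what was demonstrated. Endpoint and fields are invented.
    import json
    import urllib.request

    order = {
        "a_end": "MMR-HKG-cage-12-port-3",
        "z_end": "MMR-SIN-cage-44-port-7",
        "bandwidth_mbps": 500,        # anywhere from 1 Mbps to 10 Gbps
        "term": "1-hour",             # short-term, elastic contract
    }

    req = urllib.request.Request(
        "https://sdn-portal.example.net/api/v1/circuits",   # placeholder URL
        data=json.dumps(order).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer <token>"},
        method="POST",
    )

    with urllib.request.urlopen(req) as resp:
        circuit = json.load(resp)
        # e.g. status moves from "provisioning" to "active" within a minute
        print(circuit["id"], circuit["status"])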

In the past, on-demand provisioning of interconnections was limited to Internet Exchange Points, fiber cross connects, VLANs, and point-to-point Ethernet connections.  Now, as carrier hotels and MMRs acknowledge the need for rapid provisioning of elastic (rapid addition and deletion of bandwidth or capacity) resources, the physical cross connect and IXP peering tools will not be adequate for market demands in the future.

SDN models, such as PacNet’s PEN, are a very innovative step towards this vision.  The underlying physical interconnection infrastructure simply becomes a software abstraction for end users (including carriers and networks) allowing circuit provisioning in a matter of minutes, rather than days.

The main requirement for full deployment is to “sell” carriers and networks on the concept, as key success factors will revolve around the network effect of participant communities.  Simply, the more connecting and participating networks within the SDN “community,” the more value the SDN MMR brings to a facility or market.

A great start to PTC 2015.  More PTC 2015 “sidebars” on Tuesday.

OSS Development for the Modern Data Center

Modern data centers are very complex environments.  Data center operators must have visibility into a wide range of integrated databases, applications, and performance indicators to effectively understand and manage their operations and activities.

While each data center is different, all Data Centers share some common systems and common characteristics, including:

  • Facility inventories
  • Provisioning and customer fulfillment processes
  • Maintenance activities (including computerized maintenance management systems (CMMS))
  • Monitoring
  • Customer management (including CRM, order management, etc.)
  • Trouble management
  • Customer portals
  • Security Systems (physical access entry/control and logical systems management)
  • Billing and Accounting Systems
  • Service usage records (power, bandwidth, remote hands, etc.)
  • Decision support system and performance management integration
  • Standards for data and applications
  • Staffing and activities-based management
  • Scheduling /calendar
  • etc…

Unfortunately, in many cases, the above systems are either managed manually, lack standards, or have no automation or integration interconnecting individual back office components.  This also includes many communication companies and telecommunications carriers which previously either adhered, or claimed to adhere, to Bellcore data and operations standards.

In some cases, the lack of integration is due to the many mergers and acquisitions of companies which have unique, or non-standard, back office systems.  The result is difficulty in cross provisioning, billing, integrated customer management, and accounting – the day-to-day operations of a data center.

Modern data centers must have a high level of automation.  In particular, if a data center operator owns multiple facilities, it becomes very difficult without automation to maintain a common look and feel, or the level of integration needed to offer a standardized product to their markets and customers.

Operational support systems, or OSS, traditionally have four main components:

  • Support for process automation
  • Collection and storage for a wide variety of operational data
  • The use of standardized data structures and applications
  • And supporting technologies

Within most commercial or public colocation facilities and data centers, customer and tenant organizations represent many different industries, products, and services.  Some large colocation centers may have several hundred individual customers.  Other data centers may have larger customers such as cloud service providers, content delivery networks, and other hosting companies.  While single large customers may be few, their internal hosted or virtual customers may also number in the hundreds, or even thousands, of individual customers.

To effectively support their customers Data Centers must have comprehensive OSS capabilities.  Given the large number of processes, data sources, and user requirements, the OSS should be designed and developed using a standard architecture and framework which will ensure OSS integration and interoperability.

We have conducted numerous Interoperability Readiness surveys with both government and private sector (commercial) data center operators during the past five years.  In more than 80% of the surveys, processes such as inventory management had been built within simple spreadsheets.  Provisioning of inventory items was normally a manual process conducted via e-mail or, in some cases, paper forms.

Provisioning, as a manual process, resulted in cases of double-booked or double-sold inventory items, as well as inefficient ordering of additional customer-facing inventory or build-out of additional data center space.
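
A minimal sketch of how even a basic database with constraints (rather than a spreadsheet and e-mail) removes the double-booking risk described above; the table and column names are invented for illustration, not drawn from any specific OSS product.

    # Minimal sketch (not a production OSS): inventory in a database with an
    # atomic reservation check, so the same item cannot be sold twice.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE inventory (
        item_id   TEXT PRIMARY KEY,       -- e.g. a cabinet, cross connect, or port
        status    TEXT NOT NULL DEFAULT 'available',
        customer  TEXT
    )""")
    db.execute("INSERT INTO inventory (item_id) VALUES ('cab-17-rack-04')")

    def reserve(item_id: str, customer: str) -> bool:
        """Atomically reserve an item; a second reservation of the same item fails."""
        cur = db.execute(
            "UPDATE inventory SET status = 'reserved', customer = ? "
            "WHERE item_id = ? AND status = 'available'",
            (customer, item_id),
        )
        db.commit()
        return cur.rowcount == 1   # 0 rows updated means the item was already taken

    print(reserve("cab-17-rack-04", "Customer A"))   # True  - first sale succeeds
    print(reserve("cab-17-rack-04", "Customer B"))   # False - double-sale is rejected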

The problem often compounded into additional problems such as missed customer billing cycles, accounting shortfalls, and management or monitoring system errors.

The new data center, including virtual data centers within cloud service providers, must develop better OSS tools and systems to accommodate the rapidly changing need for elasticity and agility in ICT systems.  This includes having a single window for all required items within the OSS.

Preparing an OSS architecture, based on a service-oriented architecture (SOA), should include use of ICT-friendly frameworks and guidance such as TOGAF and/or ITIL to ensure all visions and designs fully acknowledge and embrace the needs of each organization’s business owners and customers, and follow a comprehensive and structured development process to ensure those objectives are delivered.

Using standard databases, APIs, service buses, and security, and establishing a high level of governance to ensure a “standards and interoperability first” policy for all data center IT, will allow all systems to communicate, share, and reuse data, and ultimately provide automated, single-source data resources for all data center, management, accounting, and customer activities.

Any manual transfer of data between offices, applications, or systems should be prevented in favor of integrating inventory, data collections and records, processes, and performance management indicators into a fully integrated and interoperable environment.  A basic rule of thumb: if a human being has touched the data, the data has likely been corrupted or its integrity may be brought into question.

Looking ahead to the next generation of data center services, stepping a bit higher up the customer service maturity continuum requires much higher levels of internal process and customer process automation.

Similar to NIST’s definition of cloud computing, which lists essential characteristics including “on-demand self-service,” “rapid elasticity,” and “measured service,” in addition to resource pooling and broad network access, it can be assumed that data center users of the future will need to order and fulfill services such as network interconnections, power, virtual space (or physical space), and other services through self-service, on-demand ordering.
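
A small, hypothetical Python sketch of what a single self-service ordering interface with usage-based metering could look like; the service names and hourly rates are invented for illustration.

    # Sketch of a single self-service ordering interface for data center services
    # (interconnection, power, space) with usage-based (metered) pricing.
    from dataclasses import dataclass, field
    from datetime import datetime

    CATALOG_RATES_PER_HOUR = {          # hypothetical usage-based rates
        "cross_connect": 0.50,
        "power_kw": 0.12,
        "cabinet_space_u": 0.02,
    }

    @dataclass
    class ServiceOrder:
        customer: str
        items: dict = field(default_factory=dict)     # service -> quantity
        started: datetime = field(default_factory=datetime.utcnow)

        def add(self, service: str, quantity: float) -> None:
            if service not in CATALOG_RATES_PER_HOUR:
                raise ValueError(f"unknown service: {service}")
            self.items[service] = self.items.get(service, 0) + quantity

        def metered_cost(self, hours: float) -> float:
            """Usage-based (metered) cost for the elapsed hours."""
            return sum(CATALOG_RATES_PER_HOUR[s] * q * hours
                       for s, q in self.items.items())

    order = ServiceOrder("tenant-42")
    order.add("cross_connect", 2)       # two on-demand interconnections
    order.add("power_kw", 5)            # 5 kW of metered power
    print(round(order.metered_cost(hours=24), 2))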

The OSS must strive to meet the following objectives:

  • Standardization
  • Interoperability
  • Reusable components and APIs
  • Data sharing

Accomplishing this will require nearly all of the above-mentioned OSS components to have their inventories in databases (not spreadsheets), process automation, and standards for data structures, APIs, and application interoperability.

And as the ultimate key success factor, management decision support systems (DSS) will finally have the potential for development of true dashboards for performance management, data analytics, and additional real-time tools for making effective organizational decisions.

You Want Money for a Data Center Buildout?

A couple years ago I attended several “fast pitch” competitions and events for entrepreneurs in Southern California, all designed to give startups a chance to “pitch” their ideas in about 60 seconds to a panel of representatives from the local investment community.  Similar to television’s “Shark Tank,” most of the pitched ideas were harshly critiqued, with the real intent of assisting participating entrepreneurs in developing a better story for approaching investors and markets.

While very few of the pitches received a strong, positive response, I recall one young guy who really set the panel back a step in awe.  The product was related to biotech, and the panel provided a very strong, positive response to the pitch.

Wishing to dig a bit deeper, one of the panel members asked the guy how much money he was looking for in an investment, and how he’d use the money.

“$5 million,” he responded, with a resounding wave of nods from the panel.  “I’d use around $3 million for staffing, getting the office started, and product development.”  Another round of positive expressions.  “And then we’d spend around $2 million setting up in a data center with servers, telecoms, and storage systems.”

This time the panel looked as if they’d just taken a crisp slap to the face.  After a moment of collection, the panel spokesman launched into a dressing down of the entrepreneur, stating “I really like the product, and think your vision is solid.  However, with a greater than 95% chance of your company going bust within the first year, I have no desire to be stuck with $2 million worth of obsolete computer hardware, and potentially contract liabilities, once you shut down your data center.  You’ve got to use your head and look at going to Amazon for your data center capacity and forget this data center idea.”

Now it was the entire audience’s turn to take a pause.

In the past, IT managers placed buying and controlling their own hardware, in their own facility, as a high priority – with no room for compromise.  Whether for perceptions of security, a desire for personal control, or simply a concern that outsourcing would limit their own career potential, server closets and small data centers were a common characteristic of most small offices.

At some point a need to have proximity to Internet or communication exchange points, or simple limitations on local facility capacity started forcing a migration of enterprise data centers into commercial colocation.  For the most part, IT managers still owned and controlled any hardware outsourced into the colocation facility, and most agreed that in general colocation facilities offered higher uptime, fewer service disruptions, and good performance, in particular for eCommerce sites.

Now we are at a new IT architecture crossroads.  Is there really any good reason for a startup, medium, or even large enterprise to continue operating their own data center, or even their own hardware within a colocation facility?  Certainly if the average CFO or business unit manager had their choice, the local data center would be decommissioned and shut down as quickly as possible.  The CAPEX investment, carrying hardware on the books for years of depreciation, lack of business agility, and dangers of business continuity and disaster recovery costs force the question of “why don’t we just rent IT capacity from a cloud service provider?”

Many still question the security of public clouds, many still question the compliance issues related to outsourcing, and many still simply do not want to give up their “soon-to-be-redundant” data center jobs.

Of course it is clear most large cloud computing companies have much better resources available to manage security than a small company, and have made great advances in compliance certifications (mostly due to the US government acknowledging the role of cloud computing and changing regulations to accommodate those changes).  If we look at the US Government’s FedRAMP certification program as an example, security, compliance, and management controls are now a standard – open for all organizations to study and adopt as appropriate.

So we get back to the original question: what would justify a company continuing to develop data centers, when a virtual data center (as the first small step in adopting a cloud computing architecture) will provide better flexibility, agility, security, performance, and lower cost than operating local or colocated physical IT infrastructure?  Sure, exceptions exist, including some specialized hardware interfaces to support mining, health care, or other very specialized activities.  However, if you’re not in the computer or switch manufacturing business – can you really continue justifying CAPEX expenditures on IT?

IT is quickly becoming a utility.  As a business we do not plan to build roads, build water distribution, or build our own power generation plants.  Compute, telecom, and storage resources are becoming a utility, and IT managers (and data center / colocation companies) need to do a comprehensive review of their business and strategy, and find a way to exploit this technology reality, rather than allow it to pass us by.

Putting Enterprise Architecture Principles to Work

This week brought another great consulting gig, working with old friends and respected colleagues.  The challenge driving the consultation was brainstorming a new service for their company, and how best to get it into operation.

The new service vision was pretty good.  The service would fill a hole, or shortfall, in the industry, which would better enable their customers to compete in markets both in the US and abroad.  However the process of planning and delivering this service, well, simply did not exist.

The team’s sense of urgency to deliver the service was high, based on a perception if they did not move quickly, then they would suffer an opportunity loss while competitors moved quickly to fill the service need themselves.

While it may have been easy to “jump on the bandwagon” and share the team’s enthusiasm, they lacked several critical components needed to deliver a new service, including:

  • No specific product or service definition
  • No market analysis or survey, even at a high level
  • No cost analysis or revenue projection
  • No risk analysis
  • No high level implementation plan or schedule

“We have great ideas from vendors, and are going to try to put together a pilot test as quickly as possible.  We are trying to gather a few of our customers to participate right now,” stated one of the team.

At that point, reluctantly, I had to put on the brakes.  While not making any attempt to dampen the team’s enthusiasm, to promote a successful service launch I forced them to consider additional requirements, such as:

  • The need to build a business case
  • The need for integration of the service into existing back office systems, such as inventory, book-to-bank, OSS, management and monitoring, finance and billing, executive dashboards (KPIs, service performance, etc.)
  • Staffing and training requirements
  • Options of in-sourcing, outsourcing, or partnering to deliver the service
  • Developing RFPs (even simple RFPs) to help evaluate vendor options
  • and a few other major items

“That just sounds like too much work.  If we need to go through all that, we’ll never deliver the service.  Better to just work with a couple vendors and get it on the street.”

I should note the service would touch many, many people in the target industry, which is very tech-centric.  Success or failure of the service could have a major impact on the success or failure of many in the industry.

As a card-carrying member of the enterprise architecture cult, and a proponent of other IT-related frameworks such as ITIL, COBIT, Open FAIR, and other business modeling approaches, I recognize there are bound to be conflicts between following a very structured approach to building business services and the need for agile creativity and innovation.

In this case, I asked the team to indulge me for a few minutes while I mapped out a simple, structured approach to developing and delivering the envisioned service.  By using a simplified version of the TOGAF Architecture Development Method (ADM), and adding a few lines related to standards and a service development methodology, such as the vision –> AS-IS –> gap analysis –> solutions development model, it did not take long for the team to reconsider their aggressive approach.
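
The sketch below is one possible way to represent that simplified sequence in code: phases with example deliverables, and a check that keeps the team from jumping ahead before a phase is complete.  The phase names follow the text; the deliverables shown are illustrative assumptions.

    # A deliberately lightweight sketch of the "TOGAF Light" sequence described:
    # vision -> AS-IS baseline -> gap analysis -> solution development.
    PHASES = [
        ("Vision",               ["service definition", "business case"]),
        ("AS-IS baseline",       ["current capabilities", "back office touchpoints"]),
        ("Gap analysis",         ["capability gaps", "risk analysis"]),
        ("Solution development", ["RFP / vendor evaluation", "implementation plan"]),
    ]

    def next_phase(completed: dict) -> str:
        """Return the first phase whose deliverables are not all signed off."""
        for phase, deliverables in PHASES:
            if not all(completed.get(d, False) for d in deliverables):
                return phase
        return "Ready for pilot / launch"

    status = {"service definition": True, "business case": True,
              "current capabilities": True, "back office touchpoints": False}
    print(next_phase(status))   # -> "AS-IS baseline" still needs work before moving on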

When preparing a chart of timelines using this “TOGAF Light” EA framework, the timelines were oddly similar to those of the aggressive approach.  The main difference: at the end of the EA approach the team would deliver not just a service, but a very logical, disciplined, measurable, governable, and flexible one.

Sounds a bit utopian, but in reality we were able to get to the service delivery with a better product, without sacrificing any innovation, agility, or market urgency.

This is the future of IT.  As we continue to move away from the frenzy of service deliveries of the Internet Age, and begin focusing on the business nature of IT, including the role IT plays in critical global infrastructures, the discipline of structured product and service development and delivery will continue to gain importance.

Business Drives Transition to IT as a Utility

Is there a point where business can safely assume they have hit the limit of what traditional IT organizations have to offer?  In an Internet and data driven world, does IT simply lack the agility and depth needed to fulfill business requirements and need for innovation?

Parts of cloud computing have chimed a loud and painful wake-up call for many IT managers.  Even at the most simple level, Infrastructure as a Service (IaaS), it might be fair to say this is simply a utility to accelerate data center decommissioning, and the process of physically decoupling underlying compute, storage, and network infrastructure from the business.

Due to a lack of PaaS and SaaS interface and building block standards, we still have a long way to go before we can effectively call either a utility, or truly serve the needs of interoperability and systems integration.

Of course this idea is not new.  Nicholas Carr kicked off the idea in his great view of the future, “The Big Switch,” with a lot of great analogies about compute, network, and storage capacity as a modern-day adaptation of the electrical grid.

We like to look at the analogy of roads (we won’t look at water today, but the analogy still applies).  Roads are built using standards.  In the US the Department of Transportation establishes the need and construction standards for Interstate Highways and US highways.  The states establish standards and requirements for state roads, and county / local governments establish standards for everything else.

The roads are standard.  We know what to expect when driving on an Interstate Highway.  Whether it be bridge height, lane sizing, on / off ramps, or even rest stops – it is hard to be surprised when driving the Interstate Highway system.

However the highway system does not unnecessarily inhibit development of vehicles which use the highways – there are hundreds of different makes, models, and sizes of vehicles on the road, and all use the same basic infrastructure.

Getting back to cloud computing, to make our IaaS a true utility, we need to ensure interoperability and portability within the IaaS underlying technologies, and allow for true on-demand portability of the physical infrastructure, management systems, provisioning systems, and billing systems.  Just like with the electrical grid.  And standards much like the highway system, with the flexibility to support predictable, innovative ideas.

Once we have removed the burden of underlying physical IT infrastructure from our planning model, we can focus our energy on higher levels of utility, including PaaS and SaaS.

Enterprise Architecture frameworks, such as TOGAF, promote the use of Architecture Building Blocks (ABB) and Solution Building Blocks (SBB).  Where ABBs may define global, industry, and local standards, SBBs provide definition for solutions which are specific to a project, and do not normally have either standards or other reusable components to draw from.  However, development of SBBs should still acknowledge and have a design which will support either an existing  standard, or broader development of new standard interfaces in the future.
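
As a rough illustration of the ABB / SBB relationship just described, the following Python sketch shows a solution building block declaring which standards-based architecture building blocks it implements; the class names are hypothetical and are not defined by TOGAF itself.

    # Illustrative sketch of the ABB / SBB relationship: an Architecture Building
    # Block names the standards an interface must honor, and a project-specific
    # Solution Building Block declares which ABBs it implements.
    from dataclasses import dataclass, field

    @dataclass
    class ArchitectureBuildingBlock:       # ABB: reusable, standards-based definition
        name: str
        standards: list                    # e.g. open interface or data standards

    @dataclass
    class SolutionBuildingBlock:           # SBB: project-specific realization
        name: str
        implements: list = field(default_factory=list)    # ABBs it conforms to

        def conforms_to(self, standard: str) -> bool:
            return any(standard in abb.standards for abb in self.implements)

    messaging_abb = ArchitectureBuildingBlock(
        "Messaging interface", standards=["SMTP", "REST/JSON"])
    billing_sbb = SolutionBuildingBlock(
        "Project billing notifier", implements=[messaging_abb])

    print(billing_sbb.conforms_to("SMTP"))         # True: reuses the standard interface
    print(billing_sbb.conforms_to("proprietary"))  # False: no lock-in interface declared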

This includes the most important component of open, standard, and reusable interfaces (APIs) which support service-orientation, interoperability, and portability of data.  Which may also be considered characteristics of the future PaaS and SaaS utilities.  Or in more simple terms, edging closer to the death of proprietary data or physical interfaces and functionality.

Now a reminder – at this level we are still striving to create utilities which will ultimately reduce or eliminate our need for specialized IT.  Yes, there are exceptions where specific equipment interfaces are unique to a technology, such as rock crushers in the mining industry.  However, for example, we are still able to conduct agile business on a global scale with all our customers, competitors, suppliers, and vendors all using compatible email.

That is the objective: to make the underlying infrastructure, including much of PaaS and SaaS, standard, and to serve the needs of business innovation without the danger of being inhibited by proprietary, non-standard, or incompatible interfaces.

Build a business on innovative ideas, create competitive or unique selling points and products, focus energy on developing those innovations, and relieve yourselves of the burden resulting from carrying excessive and unproductive IT infrastructure below the business.

And then IT is a utility.

Focusing on Cloud Portability and Interoperability

Cloud Computing has helped us understand both the opportunity, and the need, to decouple physical IT infrastructure from the requirements of business.  In theory cloud computing greatly enhances an organization’s ability to not only decommission inefficient data center resources, but even more importantly eases the process an organization needs to develop when moving to integration and service-orientation within supporting IT systems.

Current cloud computing standards, such as those published by the US National Institute of Standards and Technology (NIST), have provided very good definitions and a solid reference architecture for understanding, at a high level, the vision of cloud computing.

However these definitions, while good for addressing the vision of cloud computing, are not at the level of detail needed to really understand the potential impact of cloud computing within an existing organization, nor the potential of enabling data and systems resources to meet the need for interoperability of data in a 2020 or 2025 IT world.

The key to interoperability, and subsequent portability, is a clear set of standards.  The Internet emerged as a collaboration of academic, government, and private industry development which bypassed much of the normal technology vendor desire to create a proprietary product or service.  The cloud computing world, while having deep roots in mainframe computing, time-sharing, grid computing, and other web hosting services, was really thrust upon the IT community with little fanfare in the mid-2000s.

While NIST, the Open GRID Forum, OASIS, DMTF, and other organizations have developed some levels of standardization for virtualization and portability, the reality is applications, platforms, and infrastructure are still largely tightly coupled, restricting the ease most developers would need to accelerate higher levels of integration and interconnections of data and applications.

NIST’s Cloud Computing Standards Roadmap (SP 500-291 v2) states:

“…the migration to cloud computing should enable various multiple cloud platforms seamless access between and among various cloud services, to optimize the cloud consumer expectations and experience.

Cloud interoperability allows seamless exchange and use of data and services among various cloud infrastructure offerings and to the data and services exchanged to enable them to operate effectively together.”

Very easy to say, however the reality is, in particular with PaaS and SaaS libraries and services, that few fully interchangeable components exist, and any information sharing is a compromise in flexibility.

The Open Group, in their document “Cloud Computing Portability and Interoperability” simplifies the problem into a single statement:

“The cheaper and easier it is to integrate applications and systems, the closer you are getting to real interoperability.”

The alternative is of course an IT world that is restrained by proprietary interfaces, extending the pitfalls and dangers of vendor lock-in.

What Can We Do?

The first thing is, the cloud consumer world must make a stand and demand vendors produce services and applications based on interoperability and data portability standards.  No IT organization in the current IT maturity continuum should be procuring systems that do not support an open, industry-standard, service-oriented infrastructure, platform, and applications reference model (Open Group).

In addition to the need for interoperable data and services, the concept of portability is essential to developing, operating, and maintaining effective disaster management and continuity of operations procedures.  No IT infrastructure, platform, or application should be considered which does not allow and embrace portability.  This includes NIST’s guidance stating:

“Cloud portability allows two or more kinds of cloud infrastructures to seamlessly use data and services from one cloud system and be used for other cloud systems.”

The bottom line for all CIOs, CTOs, and IT managers – accept the need for service-orientation within all existing or planned IT services and systems.  Embrace Service-Oriented Architectures and Enterprise Architecture, and avoid at all costs the potential for vendor lock-in when considering any level of infrastructure or service.

Standards are the key to portability and interoperability, and IT organizations have the power to continue forcing adoption and compliance with standards by all vendors.  Do not accept anything which does not fully support the need for data interoperability.

It is Time to Consider Wireless Mesh Networking in Our Disaster Recovery Plans

Wireless Mesh Networking (WMN) has been around for quite a few years.  However, not until recently, when protesters in Cairo and Hong Kong used utilities such as Firechat to bypass the mobile phone systems and communicate directly with each other, did mesh networking become well known.

A WMN establishes an ad hoc communications network using the WiFi (802.11/15/16) radios on participants’ mobile phones and laptops to connect with each other, and extends the connectable portion of the network to any device with WMN software.  Some devices may act as clients, some as mesh routers, and some as gateways.  Of course there are more technical issues to fully understand with mesh networks, however the bottom line is if you have an Android, iOS, or software-enabled laptop you can join, extend, and participate in a WMN.
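
The sketch below is a toy Python illustration of that idea (not a real 802.11s or FireChat implementation): each node forwards a message once to every neighbor within radio range, so the message hops across the mesh, eventually reaching a gateway node, with no carrier network involved.

    # Toy sketch of mesh relaying: nodes forward a message to neighbors in radio
    # range; a gateway node represents a path out of the mesh (e.g. to the Internet).
    import math

    class MeshNode:
        def __init__(self, name, x, y, radio_range=100.0, is_gateway=False):
            self.name, self.x, self.y = name, x, y
            self.radio_range = radio_range
            self.is_gateway = is_gateway          # e.g. a node with an Internet uplink
            self.seen = set()                     # message IDs already relayed

    def in_range(a: MeshNode, b: MeshNode) -> bool:
        return math.hypot(a.x - b.x, a.y - b.y) <= min(a.radio_range, b.radio_range)

    def flood(sender: MeshNode, nodes: list, msg_id: str, text: str) -> None:
        """Simple flood: forward once per node to every neighbor in range."""
        if msg_id in sender.seen:
            return
        sender.seen.add(msg_id)
        tag = "  [gateway -> outside world]" if sender.is_gateway else ""
        print(f"{sender.name} relays: {text}{tag}")
        for neighbor in nodes:
            if neighbor is not sender and in_range(sender, neighbor):
                flood(neighbor, nodes, msg_id, text)

    nodes = [MeshNode("phone-A", 0, 0), MeshNode("phone-B", 80, 0),
             MeshNode("laptop-C", 150, 30), MeshNode("gateway-D", 220, 40, is_gateway=True)]
    flood(nodes[0], nodes, "msg-001", "We are safe, need water at 5th and Main")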

In locations highly vulnerable to natural disasters, such as hurricane, tornado, earthquake, or wildfire, access to communications can most certainly mean the difference between surviving and not surviving.  However, during disasters, communications networks are likely to fail.

The same concept used to allow protesters in Cairo and Hong Kong to communicate outside of the mobile and fixed telephone networks could, and possibly should, have a role to play in responding to disasters.

An interesting use of this type of network was highlighted in a recent novel by Matthew Mather, entitled “Cyberstorm.”  Following a “cyber” attack on the US Internet and connected infrastructures, much of the fixed communications infrastructure was rendered inoperable, and utilities depending on those networks also fell under the impact.  An ad hoc WMN was built by some enterprising technicians, using the wireless radios available within most smart phones.  This supported primarily messaging, but it did allow citizens to communicate with each other – and with the police – by interconnecting their smart phones into the mesh.

We have already embraced mobile phones, with SMS instant messaging, into many of our country’s emergency notification systems.  In California we can receive instant notifications from emergency services via SMS and Twitter, in addition to reverse 911.  This actually works very well, up to the point of a disaster.

WMN may provide a model for ensuring communications following a disaster.  As nearly every American now has a mobile phone, with a WiFi radio, the basic requirements for a mesh network are already in our hands.  The main barrier, today, with WMN is the distance limitations between participating access devices.  With luck WiFi antennas will continue to increase in power, reducing distance barriers, as each new generation is developed.

There are quite a few WMN clients available for smart phones, tablets, and WiFi-enabled devices today.  While many of these are used as instant messaging and social platforms today, just as with other social communications applications such as Twitter, the underlying technology can be used for many different uses, including of course disaster communications.

Again, the main limitations on using WMNs in disaster planning today are the limited number of participating nodes (devices with a WiFi radio), distance limitations with existing wireless radios and protocols, and the fact that very few people are even aware of the concept of WMNs and their potential deployments or uses.  The more participants in a WMN, the more robust it becomes, the better performance the WMN will support, and the better the chance your voice will be heard during a disaster.

Here are a couple WMN Disaster Support ideas I’d like to either develop, or see others develop:

  • Much like the existing 911 network, a WMN standard could and should be developed for all mobile phone devices, tablets, and laptops with a wireless radio
  • Each mobile device should include an “App” for disaster communications
  • Cities should attempt to install WMN compatible routers and access points, particularly in areas at high risk for natural disasters, which could be expected to survive the disaster
  • Citizens in disaster-prone areas should be encouraged to add a solar charging device to their earthquake, wildfire, and  other disaster-readiness kits to allow battery charging following an anticipated utility power loss
  • Survivable mesh-to-Internet gateways should be the responsibility of city government, while allowing citizen or volunteer gateways (including ham radio) to facilitate communications out of the disaster area
  • Emergency applications should include the ability to easily submit disaster status reports, including photos and video, to local, state, or FEMA Incident Management Centers (a sketch of one possible report format follows this list)
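
As promised above, here is a hypothetical sketch of what such a status report might look like as a compact, relayable message; the field names and severity values are assumptions, not any official FEMA or incident-management format.

    # Hypothetical disaster status report structure, small enough to relay over a
    # mesh hop by hop and deliver once a gateway node is reached.
    import json
    from datetime import datetime, timezone

    def build_status_report(reporter_id, lat, lon, severity, text, media_paths=()):
        """Assemble a compact report that could be relayed across a WMN."""
        return {
            "type": "disaster_status_report",
            "reporter": reporter_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "location": {"lat": lat, "lon": lon},
            "severity": severity,              # e.g. "info", "injury", "trapped"
            "text": text,
            "media": list(media_paths),        # photos/video attached when bandwidth allows
        }

    report = build_status_report(
        "phone-A", 34.0522, -118.2437, "injury",
        "Two injured at collapsed overpass, need medical assistance")
    print(json.dumps(report, indent=2))        # queued locally until a gateway is in range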

That is a start.

Take a look at Wireless Mesh Networks.  Wikipedia has a great high-level explanation, and a Google search yields hundreds of entries.  WMNs are nothing new, but as with the early days of the Internet, are not getting a lot of attention.  However maybe at some time in the future a WMN could save your life.

Adopting Critical Thinking in Information Technology

The scenario is a data center, late on a Saturday evening.  A telecom distribution system fails, and operations staff are called in from their weekend to quickly find the problem and restore operations as quickly as possible.

As time goes on, many customers begin to call in and open trouble tickets, upset at system outages and escalating customer disruptions.

The team spends hours trying to fix a rectifier providing DC power to a main telecommunications distribution switch, starting by replacing each system component one by one, hoping to find the guilty part.  The team grows very frustrated due not only to fatigue, but also to their failure to solve the problem.  After many hours the team finally realizes there is no issue with either the telecom switch or the rectifier supplying DC power to the switch.  What could the problem be?

Finally, after many hours of troubleshooting, chasing symptoms, and hit / miss component replacements,  an electrician discovers there is a panel circuit that has failed due to many years of misuse (for those electrical engineers it was actually a circuit that oxidized and shorted due to “over-amping” the circuit – without preventive maintenance or routine checks).

The incident highlighted a reality – the organization working on the problem had very little critical thinking or problem solving skills.  They chased each obvious symptom, but never really addressed or successfully identified the underlying problem.  Great technicians, poor critical thinkers.   And a true story.

While this incident was a data center-related troubleshooting failure, we frequently fail to use good critical thinking not only in troubleshooting, but also in developing opportunities and solutions for our business users and customers.

A few years ago I took a break from the job and spent some time working on personal development.  In addition to collecting certifications in TOGAF, ITIL, and other architecture-related subjects, I added a couple of additional classes, including Kepner-Tregoe (K-T) and Kepner-Fourie (K-F) Critical Thinking and Problem Solving courses.

Not bad schools of thought, and a good refresher course reminding me of those long since forgotten systems management skills learned in graduate school – heck, nearly 30 years ago.

Here is the problem: IT systems and business use of technologies have rapidly developed during the past 10 years, and that rate of change appears to be accelerating.  Processes and standards developed 10, 15, or 20 years ago are woefully inadequate to support much of our technology and business-related design, development, and operations.  Tacit knowledge, tacit skills, and gut feelings cannot be relied on to correctly identify and solve problems we encounter in our fast-paced IT world.

Keep in mind, this discussion is not only related to problem solving, but also works just as well when considering new product or solution development for new and emerging business opportunities or challenges.

Critical Thinking forces us to know what a problem (or opportunity) is, know and apply the differences between inductive and deductive reasoning, identify premises and conclusions, good and bad arguments, and acknowledge issue descriptions and explanations (Erlandson).

Critical Thinking “religions” such as Kepner-Fourie (K-F) provide a process and model for solving problems.  Not bad if you have the time to create and follow heavy processes, or even better can automate much of the process.  However, even studying extensive systems like K-T and K-F reinforces the need to establish an appropriate system for responding to events.

Regardless of the approach you may consider, repeated exposure to critical thinking concepts and practice will force us to  intellectually step away from chasing symptoms or over-reliance on tacit knowledge (automatic thinking) when responding to problems and challenges.

For IT managers, think of it as an intellectual ITIL Continuous Improvement Cycle – we always need to exercise our brains and thought process.  Status quo, or relying on time-honored solutions to problems will probably not be sufficient to bring our IT organizations into the future.  We need to continue ensuring our assumptions are based on facts, and avoid undue influence – in particular by vendors, to ensure our stakeholders have confidence in our problem or solution development process, and we have a good awareness of business and technology transformations impacting our actions.

In addition to those courses and critical thinking approaches listed above, exposure and study of those or any of the following can only help ensure we continue to exercise and hone our critical thinking skills.

  • A3 Management
  • Toyota Kata
  • PDSA (Plan-Do-Study-Act)

And lots of other university or related courseware.  For myself, I keep my interest alive by reading an occasional eBook (such as “How to Think Clearly, A Guide to Critical Thinking” by Doug Erlandson – great to read during long flights) and watching YouTube videos.

What do you “think?”

Developing a New “Service-Centric IT Value Chain”

As IT professionals we have been overwhelmed with different standards for each component of architecture, service delivery, governance, security, and operations.  Not only does IT need to ensure technical training and certification, but it is also desirable to pursue certifications in ITIL, TOGAF, COBIT, PMP, and a variety of other frameworks – at a high cost in both time and money.

Wouldn’t it be nice to have an IT framework or reference architecture which brings all the important components of each standard or recommendation into a single model which focuses on the most important aspect of each existing model?

The Open Group is well-known for publishing TOGAF (The Open Group Architecture Framework), in addition to a variety of other standards and frameworks related to Service-Oriented Architectures (SOA), security, risk, and cloud computing.  In the past few years, recognizing the impact of broadband, cloud computing, SOAs, and the need for a holistic enterprise architecture approach to business and IT, it has published many common-sense but powerful recommendations, such as:

  • TOGAF 9.1
  • Open FAIR (Risk Analysis and Assessment)
  • SOCCI (Service-Oriented Cloud Computing Infrastructure)
  • Cloud Computing
  • Open Enterprise Security Architecture
  • Document Interchange Reference Model (for interoperability)
  • and others.

The Open Group’s latest project intended to streamline and focus IT systems development is called the “IT4IT” Reference Architecture.  While still in the development, or “snapshot,” phase, IT4IT is surprisingly easy to read, understand, and, most importantly, logical.

“The IT Value Chain and IT4IT Reference Architecture represent the IT service lifecycle in a new and powerful way. They provide the missing link between industry standard best practice guides and the technology framework and tools that power the service management ecosystem. The IT Value Chain and IT4IT Reference Architecture are a new foundation on which to base your IT operating model. Together, they deliver a welcome blueprint for the CIO to accelerate IT’s transition to becoming a service broker to the business.” (Open Group’s IT4IT Reference Architecture, v 1.3)

The IT4IT Reference Architecture acknowledges changes in both technology and business resulting from the incredible impact Internet and automation have had on both enterprise and government use of information and data.  However the document also makes a compelling case that IT systems, theory, and operations have not kept up with either existing IT support technologies, nor the business visions and objectives IT is meant to serve.

IT4IT’s development team is a large, global collaborative effort including vendors, enterprise, telecommunications, academia, and consulting companies.  This helps drive a vendor or technology neutral framework, focusing more on running IT as a business, rather than conforming to a single vendor’s product or service.  Eventually, like all developing standards, IT4IT may force vendors and systems developers to provide a solid model and framework for developing business solutions, which will support greater interoperability and data sharing between both internal and external organizations.

The visions and objectives for IT4IT include two major components, which are the IT Value Chain and IT4IT Reference Architecture.  Within the IT4IT Core are sections providing guidance, including:

  • IT4IT Abstractions and Class Structures
  • The Strategy to Portfolio Value Stream
  • The Requirement to Deploy Value Stream
  • The Request to Fulfill Value Stream
  • The Detect to Correct Value Stream

Each of the above main sections has borrowed from, or further developed, ideas and activities from within ITIL, COBIT, and TOGAF, but has taken a giant leap by including cloud computing, SOAs, and enterprise architecture in the product.

As the IT4IT Reference Architecture is completed, and supporting roadmaps developed, the IT4IT concept will no doubt find a large legion of supporters, as many, if not most, businesses and IT professionals find the certification and knowledge path for ITIL, COBIT, TOGAF, and other supporting frameworks either too expensive, or too time consuming (both in training and implementation).

Take a look at IT4IT at the Open Group’s website, and let us know what you think.  Too light?  Not needed?  A great idea or concept?  Let us know.
