It is Time to Get Serious about Architecting ICT

Just finished another ICT-related technical assistance visit with a developing country government.  Even in mid-2014, I spend a large amount of time teaching basic principles of enterprise architecture and the need to add form and structure to ICT strategies.

Service-oriented architectures (SOA) have been around for a long time, with some references going back to the 1980s.  ITIL, COBIT, TOGAF, and other ICT standards and frameworks have a similarly long history, with training and certification now part of nearly every professional development program.

So why is the idea of architecting ICT infrastructure still an abstraction to so many in government and even private industry?  It cannot be the lack of training opportunities or publicly available reference materials.  It cannot be the lack of technology, or the lack of consultants readily willing to assist in deploying EA, SOA, or interoperability within any organization or industry cluster.

During the past two years we have run several Interoperability Readiness Assessments within governments.  The assessment initially takes the form of a survey distributed to a sample of 100 or more participants, with positions ranging from administrative task-based workers to C-level and other senior leaders within ministries and government agencies.

Questions range from basic ICT knowledge to data sharing, security, and decision support systems.

While the idea of information silos is well documented and understood, it is still surprising to see how prevalent “siloed” attitudes remain in modern organizations.  Take the following question:

Question on Information Sharing

This question did not refer to sharing data outside of the government, but rather within the government.  The responses indicate a significant lack of trust when interacting with other government agencies, which will of course prevent any chance of developing a SOA or facilitating information sharing among agencies.  The end result is a lower level of both integrity and value in national decision support capability.

The Impact of Technology and Standardization

Most governments are considering or implementing data center consolidation initiatives.  There are several good reasons for this, including:

  • Cost of real estate, power, staffing, maintenance, and support systems
  • Transition from CAPEX-based ICT infrastructure to OPEX-based
  • Potential for virtualization of server and storage resources
  • Standardized cloud computing resources

While all those justifications for data center consolidation are valid, their value pales in comparison to the potential of more intelligent use of data across organizations, and even with external agencies.  On the difficulty of getting to this point, one senior government official stated:

“Government staff are not necessarily the most technically proficient.  This results in reliance on vendors for support, thought leadership, and in some cases contractual commitments.  Formal project management training and certification are typically not part of the capacity building of government employees.

Scientific approaches to project management, especially ones that lend themselves to institutionalization and adoption across different agencies will ensure a more time-bound and intelligent implementation of projects. Subsequently, overall knowledge and technical capabilities are low in government departments and agencies, and when employees do gain technical proficiency they will leave to join private industry.”

There is also an issue with a variety of international organizations going into developing countries or economies and offering no- or low-cost single-use ICT infrastructure, such as systems for health-related agencies, which is not compatible with any other government-owned or operated applications or data sets.

The more this occurs, the more difficult it becomes for government organizations to enable interoperability or data sharing, and the idea of a common architecture or shared data becomes extremely difficult, if not impossible, to implement.

The Road to EA, SOAs, and Decision Support

There are several actions to take on the road to meeting our ICT objectives.

  1. Include EA, service delivery (ITIL), governance (COBIT), and SOA training in all university and professional ICT education programs.  It is not all about writing code or configuring switches; we need to ensure a holistic understanding of ICT value in all ICT education, producing a higher level of qualified graduates entering the work force.
  2. Ensure government and private organizations develop or adopt standards or regulations which drive enterprise architecture, information exchange models, and SOAs as a basic requirement of ICT planning and operations.
  3. Ensure executive awareness and support, preferably through a formal position such as the Chief Information Officer (CIO).  Principles developed and published via the CIO must be adopted and governed by all organizations.

Nobody expects large organizations, in particular government organizations, to change their cultures of information independence overnight.  This is a long-term evolution as the world continues to better understand the extent and value of existing data sets, and begins creating new categories of data.  Big data, data analytics, and exploitation of both structured and unstructured data will empower those who are prepared, and leave those who are not prepared far behind.

For a government, not having the ability to access, identify, share, analyze, and act on data created across agencies will inhibit effective decision support, with potential impact on disaster response, security, economic growth, and overall national quality of life.

If there is a call to action in this message, it is for governments to take a close look at how their national ICT policies, strategies, human capacity, and operations are meeting national objectives.  Prioritizing use of EA and supporting frameworks or standards will provide better guidance across government, and all steps taken within the framework will add value to the overall ICT capability.

Pacific-Tier Communications LLC provides consulting to governments and commercial organizations on topics related to data center consolidation, enterprise architecture, risk management, and cloud computing.

Connecting at the Westin Building Exchange in Seattle

International telecommunication carriers all share one thing in common – the need to connect with other carriers and networks.  We want to make a call to China, hold a video conference in Moldova, or send an email message for delivery within 5 seconds to Australia – all possible with our current state of global communications.  Magic?  Of course not.  While an abstraction to most, the reality is that telecommunications physical infrastructure extends to nearly every corner of the world, and communications carriers bring this global infrastructure together at a small number of facilities strategically placed around the world, informally called “carrier hotels.”

Pacific-Tier had the opportunity to visit the Westin Building Exchange (commonly known as the WBX), one of the world’s busiest carrier hotels, in early August.   Located in the heart of Seattle’s bustling business district, the WBX stands tall at 34 stories.  The building also acts as a crossroads of the Northwest US long distance terrestrial cable infrastructure, and is adjacent to trans-Pacific submarine cable landing points.

The world’s telecommunications community needs carrier hotels to interconnect their physical and value-added networks, and the WBX is doing a great job of facilitating both types of interconnection among its more than 150 carrier tenants.

“We understand the needs of our carrier and network tenants” explained Mike Rushing,   Business Development Manager at the Westin Building.  “In the Internet economy things happen at the speed of light.  Carriers at the WBX are under constant pressure to deliver services to their customers, and we simply want to make this part of the process (facilitating interconnections) as easy as possible for them.”

The WBX community is not limited to carriers.  The community has evolved to support Internet Service Providers, Content Delivery Networks (CDNs), cloud computing companies, academic and research networks, enterprise customers, public colocation and data center operators, the NorthWest GigaPOP, and even the Seattle Internet Exchange Point (SIX), one of the largest Internet exchanges in the world.

“Westin is a large community system,” continued Rushing.  “As new carriers establish a point of presence within the building, and begin connecting to others within the tenant and accessible community, then the value of the WBX community just continues to grow.”

The core of the WBX is the 19th floor meet-me-room (MMR).  The MMR is a large, neutral, interconnection point for networks and carriers representing both US and international companies.  For example, if China Telecom needs to connect a customer’s headquarters in Beijing to an office in Boise served by AT&T, the actual circuit must transfer at a physical demarcation point from China Telecom  to AT&T.  There is a good chance that physical connection will occur at the WBX.

According to Kyle Peters, General Manager of the Westin Building, “we are supporting a wide range of international and US communications providers and carriers.  We fully understand the role our facility plays in supporting not only our customer’s business requirements, but also the role we play in supporting global communications infrastructure.”

You would be correct in assuming the WBX plays an important role in that critical US and global communications infrastructure.  Thus you would further expect the WBX to be constructed and operated in a manner providing a high level of confidence to the community that its installed systems will not fail.

Lance Forgey, Director of Operations at the WBX, manages not only the MMR, but also the massive mechanical (air conditioning) and electrical distribution systems within the building.  A former submarine engineer, Forgey runs the Westin Building much like he operated critical systems within Navy ships.  Assisted by an experienced team of former US Navy engineers and US Marines, the facility presents an image of security, order, cleanliness, and operational attention to detail.

“Our operations and facility staff bring the discipline of many years in the military, adding the innovation needed to keep up with our customers’ industries,” said Forgey.  “Once you have developed a culture of no compromise on quality, then it is easy to keep things running.”

That is very apparent when you walk through the site – everything is in its place, it is remarkably clean, and it is very obvious the entire site is the product of a well-prepared plan.

One area which stands out at the WBX is the cooling and electrical distribution infrastructure.  With space available in adjacent parking structures and other areas outside of the building, most heavy equipment is located outside, providing an additional layer of physical security and allowing the WBX to recover as much space within the building as possible for customer use.

“Power is not an issue for us,” noted Forgey.  “It is a limiting factor for much of our industry; however, at the Westin Building we have plenty, and can add additional power anytime the need arises.”

That is another attraction of the WBX versus some of the other carrier hotels on the US West Coast.  Power in Washington State averages around $0.04/kWh, while power in California may be nearly three times as expensive.

“In addition to having all the interconnection benefits similar operations have on the West Coast, the WBX can also significantly lower operating costs for tenants,” added Rushing.  As power is a major factor in data center operating costs, a significant reduction in the price of power is a big deal for tenants.
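
To put those rates in perspective, here is a minimal back-of-the-envelope sketch comparing annual energy spend at the two price points.  The 1 MW IT load and the California rate (taken as roughly three times the Washington figure) are assumptions for illustration only.

```python
# Rough annual power-cost comparison for a hypothetical 1 MW IT load.
# Rates are the approximate figures quoted above ($0.04/kWh in Washington,
# roughly three times that in California); actual tariffs vary.

HOURS_PER_YEAR = 24 * 365

def annual_power_cost(load_kw: float, rate_per_kwh: float) -> float:
    """Return the yearly energy cost for a constant load in kW."""
    return load_kw * HOURS_PER_YEAR * rate_per_kwh

load_kw = 1000  # hypothetical 1 MW of IT load
wa_cost = annual_power_cost(load_kw, 0.04)
ca_cost = annual_power_cost(load_kw, 0.12)  # ~3x the Washington rate

print(f"Washington: ${wa_cost:,.0f}/year")
print(f"California: ${ca_cost:,.0f}/year")
print(f"Difference: ${ca_cost - wa_cost:,.0f}/year")
```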

The final area carrier hotels need to address is the ever changing nature of communications, including interconnections between members of the WBX community.  Nothing is static, and the WBX team is constantly communicating with tenants, evaluating changes in supporting technologies, and looking for ways to ensure they have the tools available to meet their rapidly changing environments.

Cloud computing, software-defined networking, carrier Ethernet – all topics which require frequent communication with tenants to gain insight into their visions, concerns, and plans.  The WBX staff showed great interest in cooperating with tenants to ensure the WBX will not impede development or implementation of new technologies, while attempting to stay ahead of customer deployments.

“If a customer comes to us and tells us they need a new support infrastructure or framework with very little lead time, then we may not be able to respond quickly enough to meet their requirements” concluded Rushing.  “Much better to keep an open dialog with customers and become part of their team.”

Pacific-Tier has visited and evaluated dozens of data centers during the past four years.  Some have been very good, some have been very bad.  Some have gone over the edge in data center deployments, chasing the “grail” of a Tier IV data center certification, while some have been little more than a server closet.

The Westin Building / WBX is unique in the industry.  Owned by both Clise Properties of Seattle and Digital Realty Trust, the Westin Building brings the best of both the real estate world and the data center world into a single operation.  The quality of the mechanical and electrical infrastructure, the people maintaining it, and the vision of the company give a visitor the impression that not only is the WBX a world-class facility, but also that all staff and management know their business, enjoy the business, and put their customers first.

As Clise Properties owns much of the surrounding land, the WBX has plenty of opportunity to grow as the business expands and changes.  “We know cloud computing companies will need to locate close to the interconnection points, so we better be prepared to deliver additional high-density infrastructure as their needs arise” said Peters.  And in fact Clise has already started planning for their second colocation building.  This building, like its predecessor, will be fully interconnected with the Westin Building, including virtualizing the MMR distribution frames in each building into a single cross interconnection environment.

The WBX offers the global telecom industry an alternative to other carrier hotels in Los Angeles and San Francisco.  One shortfall in the global telecom industry is the “single threaded” links many carriers have with others in the global community.  California has the majority of North America / Asia carrier interconnections today, but it is also one of the world’s higher-risk locations for critical infrastructure; the reality is it is more a matter of “when” than “if” a catastrophic event such as an earthquake occurs that could seriously disrupt international communications passing through one of the region’s MMRs.

The telecom industry needs to have the option of alternate paths of communications and interconnection points.  While the WBX stands tall on its own as a carrier hotel and interconnection site, it is also the best alternative and diverse landing point for trans-Pacific submarine cable capacity – and subsequent interconnections.

The WBX offers a wide range of customer services, including:

  • Engineering support
  • 24×7 Remote hands
  • Fast turnaround for interconnections
  • Colocation
  • Power circuit monitoring and management
  • Private suites and lease space for larger companies
  • 24×7 security monitoring and access control

Check out the Westin Building and WBX the next time you are in Seattle, or if you want to learn more about the telecom community revolving and evolving in the Seattle area.  Contact Mike Rushing at mrushing@westinbldg.com for more information.

 

Why IT Guys Need to Learn TOGAF

Just finished another frustrating day of consulting with an organization that is convinced technology is going to solve their problems.  Have an opportunity?  Throw money and computers at the opportunity.  Have a technology answer to your process problems?  Really?

The business world is changing.  With cloud computing potentially eliminating the need for some current IT roles, such as the physical “server huggers,” information technology professionals – or more appropriately, information and communications technology (ICT) professionals – need to rethink their roles within organizations.

Is it acceptable to simply be a technology specialist, or do ICT professionals also need to be an inherent part of the business process?  Yes, a rhetorical question, and any negative answer is wrong.  ICT professionals are rapidly being relieved of the burden of data centers, servers (physical servers), and a need to focus on ensuring local copies of MS Office are correctly installed, configured, and have the latest service packs or security patches installed.

You can fight the idea, argue the concept, but in reality cloud computing is here to stay, and will only become more important in both the business and financial planning of future organizations.

Now those copies of MS Office are hosted on MS 365 or Google Docs, and your business users are telling you either quickly meet their needs or they will simply bypass the IT organization and use an external or hosted Software as a Service (SaaS) application – in spite of your existing mature organization and policies.

So what is this TOGAF stuff?  Why do we care?

Well…

As it should be, ICT is firmly being set in the organization as a tool to meet business objectives.  We no longer have to consider the limitations or “needs” of IT when developing business strategies and opportunities.  SaaS and Platform as a Service (PaaS) tools are becoming mature, plentiful, and powerful.

Argue the point, fight the concept, but if an organization isn’t at least considering a requirement for data and systems interoperability, the use of large data sets, and implementation of a service-oriented architecture (SOA), it will not be competitive or effective in the next generation of business.

TOGAF, which is “The Open Group Architecture Framework,” brings structure to the development of ICT as a tool for meeting business requirements.  TOGAF is a tool which forces each stakeholder, including senior management and business unit management, to work with ICT professionals to apply technology in a structured framework that follows these basic steps:

  • Develop a business vision
  • Determine your “AS-IS” environment
  • Determine your target environment
  • Perform a gap analysis
  • Develop solutions to meet the business requirements and vision, and fill the “gaps” between “AS-IS” and “Target”
  • Implement
  • Measure
  • Improve
  • Re-iterate
Of course TOGAF is a complex architecture framework, with a lot more involved than the above bullets.  However, the point is ICT must now participate in the business planning process – and really become part of the business, rather than a vendor to the business.

As a life-long ICT professional, it is easy for me to fall into indulging in tech things.  I enjoy networking, enjoy new gadgets, and enjoy anything related to new technology.  But it was not until about 10 years ago, when I started taking a formal, structured approach to understanding enterprise architecture and fully appreciating the value of service-oriented architectures, that I felt as if my efforts were really contributing to the success of an organization.

TOGAF was one course of study that really benefitted my understanding of the value and role IT plays in companies and government organizations.  TOGAF provides both a process and a structure for business planning.

You may have a few committed DevOps evangelists who disagree with the structure of TOGAF, but in reality once the “guardrails” are in place even DevOps can fit into the process.  TOGAF and other frameworks are not intended to stifle innovation – just to encourage that innovation to meet the goals of the organization, not the goals of the innovators.

While just one of several candidate enterprise architecture frameworks (including the US Federal Enterprise Architecture Framework/FEAF and the Department of Defense Architecture Framework/DoDAF), TOGAF is now widely accepted, and the accompanying certifications are well understood within government and enterprise.
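
For readers who think in code, the bullet-point cycle above can be expressed as a simple loop.  The sketch below is a deliberately reduced illustration of that iterative gap-analysis idea; the class, phase names, and capability sets are invented for this example and are not part of TOGAF or its ADM.

```python
# A highly simplified sketch of the iterative planning cycle described above.
# Phase names and data structures are illustrative only; TOGAF's ADM defines
# far more detail than this loop captures.

from dataclasses import dataclass, field

@dataclass
class ArchitectureCycle:
    vision: str
    as_is: set = field(default_factory=set)    # current capabilities
    target: set = field(default_factory=set)   # desired capabilities

    def gap_analysis(self) -> set:
        """Capabilities required by the target but missing from the baseline."""
        return self.target - self.as_is

    def iterate(self, delivered: set) -> set:
        """Implement and measure: fold delivered capabilities into the
        baseline, then return the remaining gap for the next iteration."""
        self.as_is |= delivered
        return self.gap_analysis()

cycle = ArchitectureCycle(
    vision="Interoperable, citizen-facing services",
    as_is={"email", "departmental databases"},
    target={"email", "shared data exchange", "decision support portal"},
)

print("Initial gap:", cycle.gap_analysis())
remaining = cycle.iterate(delivered={"shared data exchange"})
print("Remaining gap after one iteration:", remaining)
```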

What’s an IT Guy to Do?

Now we can bring the “iterative” process back to the ICT guy’s viewpoint.  Much like the telecom engineers who operated DMS 250s, 300s, and 500s, the existing IT and ICT professional corps will need to face reality: either accept the concept of cloud computing, or hope they are close to retirement.  Who needs a DMS 250 engineer in a world of soft switches?  Who needs a server manager in a world of Infrastructure as a Service?  Unless of course you work as an infrastructure technician at a cloud service provider…

Ditto for those who specialize in maintaining copies of MS Office and a local MS Exchange server.  Sadly, your time is limited, and quickly running out.  Either become a cloud computing expert in some field within cloud computing’s broad umbrella of components, or plan to be part of the business process.  To be effective as a member of the organization’s business team, you will need skills beyond IT – you will need to understand how ICT is used to meet business needs, and the impact of a rapidly evolving toolkit offered by all strata of the cloud stack.

Even better, become a leader in the business process.  If you can navigate your way through a TOGAF course and certification, you will acquire a much deeper appreciation for how ICT tools and resources could, and likely should, be planned and employed within an organization to contribute to the success of any individual project, or the re-engineering of ICTs within the entire organization.


John Savageau is TOGAF 9.1 Certified

ICT Modernization Planning

The current technology refresh cycle presents many opportunities and challenges to both organizations and governments.  The potential of service-oriented architectures, interoperability, collaboration, and continuity of operations is an attractive outcome of technologies and business models available today.  The challenges are more related to business processes and human factors, both of which require organizational transformation to take best advantage of the collaborative environments enabled through use of cloud computing and access to broadband communications.

Planning an interoperable environment for governments and organizations can be facilitated through the use of business tools such as cloud computing.  Cloud computing and its underlying technologies may create an operational environment supporting many strategic objectives being considered within government and private sector organizations.

Reaching target architectures and capabilities is not a single action, and will require a clear understanding of the current “as-is” baseline capabilities, the target requirements, the gaps in capabilities needed to reach the target, and a clear transition plan to bring the organization from the “as-is” baseline to the target goal.

To most effectively reach that goal requires an understanding of the various contributing components within the transformational ecosystem.  In addition, planners must keep in mind the goal is not implementation of technologies, but rather consideration of technologies as needed to facilitate business and operations process visions and goals.

Interoperability and Enterprise Architecture

Information technology, particularly communications-enabled technology, has enhanced business processes, education, and the quality of life for millions around the world.  However, traditionally ICT has created silos of information which are rarely integrated or interoperable with other data systems or sources.

As the science of enterprise architecture development and modeling, service-oriented architectures, and interoperability frameworks continue to force the issue of data integration and reuse, ICT developers are looking to reinforce open standards allowing publication of external interfaces and application programming interfaces.

Cloud computing, a rapidly maturing framework for virtualization, standardized data, application, and interface structure technologies, offers a wealth of tools to support development of both integrated and interoperable ICT  resources within organizations, as well as among their trading, shared, or collaborative workflow community.

The Institute for Enterprise Architecture Development defines enterprise architecture (EA) as a “complete expression of the enterprise; a master plan which acts as a collaboration force between aspects of business planning such as goals, visions, strategies and governance principles; aspects of business operations such as business terms, organization structures, processes and data; aspects of automation such as information systems and databases; and the enabling technological infrastructure of the business such as computers, operating systems and networks.”

ICT, including utilities such as cloud computing, should focus on supporting the holistic objectives of organizations implementing an EA.  Data that is neither interoperable nor shared will generally have less value than reusable data, while shared, reusable data greatly increases systems reliability and data integrity.

Business Continuity and Disaster Recovery (BCDR)

Recent surveys of governments around the world indicate, in most cases, limited or no disaster management or continuity of operations planning.  The risk of losing critical national data resources due to natural or man-made disasters is high, and the ability of most governments to maintain government and citizen services during a disaster is limited by the amount of time required to restart government services (recovery time objective/RTO), as well as the point of data restoration (recovery point objective/RPO).

In existing ICT environments, particularly those with organizational and data resource silos, RTOs and RPOs can extend to near indefinite if neither a data backup plan nor systems and service restoration capacity is in place.  This is particularly acute if the processing environment includes legacy mainframe applications which do not have mirrored recovery capacity available upon failure or loss of service due to disaster.

Cloud computing can provide a standards-based environment that fully supports near-zero RTO/RPO requirements.  While current cloud computing platforms are largely based on Intel-compatible architectures, within that limitation nearly any existing application or data source can be migrated into a virtual resource pool.  Once within the cloud computing Infrastructure as a Service (IaaS) environment, setting up distributed processing or backup capacity is relatively uncomplicated, assuming the environment has adequate broadband access to the end user and between processing facilities.
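
As a concrete way to reason about RTO and RPO, the following minimal sketch checks a hypothetical backup and restore plan against stated targets.  All figures are illustrative; real plans must account for many more variables.

```python
# Minimal sketch: does a backup/restore plan meet stated RTO/RPO targets?
# All figures are hypothetical and expressed in hours.

def meets_bcdr_targets(backup_interval_h: float, restore_time_h: float,
                       rpo_target_h: float, rto_target_h: float) -> dict:
    """Worst-case data loss equals the backup interval (RPO);
    worst-case downtime equals the restore time (RTO)."""
    return {
        "worst_case_data_loss_h": backup_interval_h,
        "worst_case_downtime_h": restore_time_h,
        "rpo_met": backup_interval_h <= rpo_target_h,
        "rto_met": restore_time_h <= rto_target_h,
    }

# Nightly tape backups with a multi-day restore vs. a near-zero cloud target.
print(meets_bcdr_targets(backup_interval_h=24, restore_time_h=72,
                         rpo_target_h=1, rto_target_h=4))
# A replicated cloud environment with hourly snapshots and automated failover.
print(meets_bcdr_targets(backup_interval_h=1, restore_time_h=0.5,
                         rpo_target_h=1, rto_target_h=4))
```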

Cloud computing-enabled BCDR also opens opportunities for developing public-private partnerships (PPPs), or for outsourcing into public or commercially operated cloud computing compute, storage, and communications infrastructure.  Again, the main limitation is the requirement for portability between systems.

Transformation Readiness

ICT modernization will drive change within all organizations.  Transformational readiness is not a matter of technology, but a combination of factors including rapidly changing business models, the need for many-to-many real-time communications, flattening of organizational structures, and the continued entry of technology and communications savvy employees into the workforce.

The potential of outsourcing utility compute, storage, application, and communications will eliminate the need for much physical infrastructure, such as redundant or obsolete data centers and server closets.  Roles will change based on the expected shift from physical data centers and ICT support hardware to virtual models based on subscriptions and catalogs of reusable application and process artifacts.

A business model for accomplishing ICT modernization includes cloud computing, which relies on technologies such as server and storage resource virtualization and adds operational characteristics such as on-demand resource provisioning, reducing the time needed to procure ICT resources in response to emerging operational or other business opportunities.

IT management and service operations move from a workstation environment to a user interface driven by SaaS.  The skills needed to drive ICT within the organization will need to change, becoming closer to the business, while reducing the need to manage complex individual workstations.

IT organizations will need to change, as organizations may elect to outsource most or all of their underlying physical data center resources to a cloud service provider, either in a public or private environment.  This could eliminate the need for some positions, while driving new staffing requirements in skills related to cloud resource provisioning, management, and development.

Business unit managers may be able to take advantage of other aspects of cloud computing, including access to on-demand compute, storage, and applications development resources.  This may increase their ability to quickly respond to rapidly changing market conditions and other emerging opportunities.   Business unit managers, product developers, and sales teams will need to become familiar with their new ICT support tools.  All positions from project managers to sales support will need to quickly acquire skills necessary to take advantage of these new tools.

The Role of Cloud Computing

Cloud computing is a business representation of a large number of underlying technologies.  Encompassing virtualization, development environments, and hosted applications, cloud computing provides a framework for standardized service models, deployment models, and service delivery characteristics.

The US National Institute of Standards and Technology (NIST) provides a definition of cloud computing accepted throughout the ICT industry.

“Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

While organizations face challenges related to developing enterprise architectures and interoperability, cloud computing continues to develop rapidly as an environment with a rich set of compute, communication, development, standardization, and collaboration tools needed to meet organizational objectives.

Data security, including privacy, is different within a cloud computing environment, as the potential for data sharing is expanded among both internal and potentially external agencies.  Security concerns grow when infrastructure multi-tenancy, network access to hosted applications (Software as a Service / SaaS), and governance of authentication and authorization raise questions about end-user trust of the cloud provider.

A move to cloud computing is often associated with data center consolidation initiatives within both governments and large organizations.  Cloud delivery models, including Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) support the development of virtual data centers.

While it is clear the long-term target architecture for most organizations will be an environment with a single data system, in the short term it may be more important to decommission high-risk server closets and unmanaged servers into a centralized, well-managed data center environment offering on-demand access to compute, storage, and network resources – as well as BCDR options.

Even at the most basic level of considering IaaS and PaaS as a replacement environment to physical infrastructure, the benefits to the organization may become quickly apparent.  If the organization establishes a “cloud first” policy to force consolidation of inefficient or high risk ICT resources, and that environment further aligns the organization through the use of standardized IT components, the ultimate goal of reaching interoperability or some level of data integration will become much easier, and in fact a natural evolution.

Nearly all major ICT-related hardware and software companies are re-engineering their product development to either drive cloud computing or be cloud-aware.  Microsoft has released its Office 365 suite of online and hosted environments, as has Google with both PaaS and SaaS tools such as Google App Engine and Google Docs.

The benefits for organizations considering a move to hosted environments such as MS 365 include access to a rich set of applications and resources available on demand, using a subscription model rather than a licensing model, and offering a high level of standardization to developers and applications.

Users comfortable with standard office automation and productivity tools will find the same features in a SaaS environment, while being relieved of individual software license costs, application maintenance, and the potential loss of resources due to equipment failure or theft.  Hosted applications also allow a persistent-state, real-time collaborative environment for multiple users requiring access to documents or projects.  Document management and single-source data available for reuse by applications and other users, reporting, and performance management become routine, reducing the threat of data corruption.

The shortfall, particularly for governments, is that using a large commercial cloud infrastructure and service provider such as Microsoft may require physically storing data in locations outside of their home country, as well as forcing data into a multi-tenant environment which may not meet security requirements for the organization.

Cloud computing offers an additional major feature at the SaaS level that will benefit nearly all organizations transitioning to a mobile workforce.  SaaS by definition is platform independent.  Users access SaaS applications and underlying data via any device offering a network connection and a browser able to reach an Internet-connected address.  The actual intelligence in an application is at the server or virtual server, and the user device is simply a dumb terminal displaying a portal, access point, or the results of a query or application executed through a command at the user screen.

Cloud computing continues to develop as a framework and toolset for meeting business objectives.  Cloud computing is well suited to respond to rapidly changing business and organizational needs, as its characteristics – on-demand access to infrastructure resources, rapid elasticity (the ability to provision and de-provision resources as needed to meet processing and storage demand), and the ability to measure cloud computing resource use for internal and external accounting – mark a major change in how an organization budgets ICT.

As cloud computing matures, each organization entering a technology refresh cycle must ask the question “are we in the technology business, or should we concentrate our efforts and budget on activities directly supporting our objectives?”  If the answer is the latter, then the organization should evaluate outsourcing its ICT infrastructure to an internal or commercial cloud service provider.

It should be noted that today most cloud computing IaaS service platforms will not support migration of mainframe applications, such as those written for a RISC processor.  Those applications require redevelopment to operate within an Intel-compatible processing environment.

Broadband Factor

Cloud computing components are currently implemented over an Internet Protocol network.  Users accessing SaaS applications will need network access to connect with applications and data.  Depending on the amount of graphics information transmitted from the host to an individual user access terminal, poor bandwidth or lack of broadband could result in an unsatisfactory experience.

In addition, BCDR requires the transfer of potentially large amounts of data between primary and backup locations.  Depending on the data protection plan – whether mirroring, partial backups, full backups, or live load balancing – data transfer could be restricted if sufficient bandwidth is not available between sites.
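
To make the bandwidth constraint concrete, the sketch below estimates how long a replication or backup transfer would take over links of different sizes.  The 5 TB data set and 70% link efficiency are assumptions for illustration; real throughput depends on latency, protocol overhead, and competing traffic.

```python
# Minimal sketch: how long does it take to move a backup set between sites?
# Assumes the link sustains a fixed fraction of its nominal rate.

def transfer_hours(data_gb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    data_megabits = data_gb * 8 * 1000        # GB -> megabits (decimal units)
    effective_mbps = link_mbps * efficiency
    return data_megabits / effective_mbps / 3600

for link in (100, 1000, 10000):               # 100 Mbps, 1 Gbps, 10 Gbps
    hours = transfer_hours(data_gb=5000, link_mbps=link)  # 5 TB backup set
    print(f"{link:>6} Mbps link: {hours:6.1f} hours for 5 TB")
```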

Cloud computing is dependent on broadband as a means of connecting users to resources, and data transfer between sites.  Any organization considering implementing cloud computing outside of an organization local area network will need to fully understand what shortfalls or limitations may result in the cloud implementation not meeting objectives.

The Service-Oriented Cloud Computing Infrastructure (SOCCI)

Governments and other organizations are entering a technology refresh cycle as existing ICT hardware and software infrastructure reaches the end of its operational life.  In addition, as the world aggressively continues to break down national and technical borders, the need for organizations to reconsider the creation, use, and management of data supporting both mission-critical business processes and decision support systems will drive change.

Given the clear direction industry is taking to embrace cloud computing services, as well as the awareness that existing siloed data structures within many organizations would better serve the organization in a service-oriented framework, it makes sense to consider an integrated approach.

A SOCCI considers both, adding reference models and frameworks – along with enterprise architecture models such as TOGAF – to ultimately provide a broad, mature framework supporting business managers and IT managers in their technology and business refresh planning process.

SOCCIs promote the use of architectural building blocks, publication of external interfaces for each application or data source developed, single-source data, reuse of data and standardized application building blocks, as well as development and use of enterprise service buses to promote further integration and interoperability of data.
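
The sketch below illustrates, in a very reduced form, the publish-and-reuse idea behind the building-block and enterprise service bus concepts listed above.  The bus class, service names, and payloads are invented for illustration and do not represent any specific SOCCI or ESB product.

```python
# A toy illustration of the "publish external interfaces, reuse single-source
# data" idea behind an enterprise service bus. Names and payloads are invented.

from typing import Callable, Dict

class SimpleServiceBus:
    def __init__(self) -> None:
        self._services: Dict[str, Callable[[dict], dict]] = {}

    def publish(self, name: str, handler: Callable[[dict], dict]) -> None:
        """Register a named service interface so any consumer can reuse it."""
        self._services[name] = handler

    def call(self, name: str, request: dict) -> dict:
        """Route a request to the published service."""
        return self._services[name](request)

bus = SimpleServiceBus()

# A single-source citizen registry published once, reused by many agencies.
registry = {"1001": {"name": "A. Citizen", "district": "North"}}
bus.publish("citizen.lookup", lambda req: registry.get(req["id"], {}))

# Another agency reuses the interface instead of keeping its own copy of the data.
print(bus.call("citizen.lookup", {"id": "1001"}))
```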

A SOCCI addresses elements of cloud computing – virtualized and on-demand compute/storage resources and access to broadband communications, including security, encryption, switching, routing, and access – as a utility.  The utility is always available to the organization for use and exploitation.  Higher-level cloud components, including PaaS and SaaS, add value and provide higher-level entry points for developing the ICT tools needed to meet the overall enterprise architecture and the service orientation required by the organization.

According to the Open Group, a SOCCI framework provides the foundation for connecting a service-oriented infrastructure with the utility of cloud computing.  As enterprise architecture and interoperability frameworks continue to gain value and importance to organizations, this framework will provide additional leverage to make the best use of available ICT tools.

The Bottom Line on ICT Modernization

The Internet has reached nearly every point in the world, providing a global community functioning within an always-available, real-time communications infrastructure.  University and primary school graduates are entering the workforce with social media, SaaS, collaboration, and location-transparent peer communities diffused into their tacit knowledge and experience.

This environment has greatly flattened any leverage developed countries or large monopoly companies have enjoyed during the past several technology and market cycles.

An organization relying on non-interoperable or non-standardized data, with no BCDR protection, will certainly risk losing its competitive edge in a world being created by technology- and data-aware challengers.

Given the urgency organizations face to address data security, continuity of operations, agility to respond to market conditions, and operational costs associated with traditional ICT infrastructure, many are looking to emerging technology frameworks such as cloud computing to provide a model for planning solutions to those challenges.

Cloud computing and enterprise architecture frameworks provide guidance and a set of tools to assist organizations in building the structure, and the infrastructure, needed to accomplish ICT modernization objectives.

Data Center Consolidation and Adopting Cloud Computing in 2013

Throughout 2012, large organizations and governments around the world continued to struggle with the idea of consolidating inefficient data centers, server closets, and individual “rogue” servers scattered around their enterprises or government agencies.  The issues dealt with the cost of operating data centers, disaster management of information technology resources, and of course human factors centered on control, power, or retention of jobs in a rapidly evolving IT industry.

Cloud computing and virtualization continue to have an impact on all consolidation discussions, not only from the standpoint of providing a much better model for managing physical assets, but also in the potential cloud offers to solve disaster recovery shortfalls, improve standardization, and encourage or enable development of service-oriented architectures.

Our involvement in projects ranging from local, state, and national government levels in both the United States and other countries indicates a consistent need for answering the following concerns:

  • Existing IT infrastructure, including both IT and facility, is reaching the end of its operational life
  • Collaboration requirements between internal and external users are expanding quickly, driving an architectural need for interoperability
  • Decision support systems require access to both raw data, and “big data/archival data”

We would like to see an effort within the IT community to move in the following directions:

  1. Real effort at decommissioning and eliminating inefficient data centers
  2. All data and applications should be fit into an enterprise architecture framework – regardless of the size of organization or data
  3. Aggressive development of standards supporting interoperability, portability, and reuse of objects and data

Regardless of the very public failures experienced by cloud service providers over the past year, the reality is cloud computing as an IT architecture and model is gaining traction, and is not likely to go away any time soon.  As with any emerging service or technology, cloud services will continue to develop and mature, reducing the impact and frequency of failures.

Why would an organization continue to buy individual high-powered workstations, individual software licenses, and device-bound storage when the same application can be delivered to a simple display, or a wide variety of displays, with standardized web-enabled cloud (SaaS) applications that store mission critical data images on a secure storage system at a secure site?  Why not facilitate the transition from CAPEX to OPEX, from license to subscription, and from infrastructure to product and service development?

In reality, unless an organization is in the hardware or software development business, there is very little technical justification for building and managing a data center.  This includes secure facilities supporting military or other sensitive sites.

The cost of building and maintaining a data center, compared with either outsourcing into a commercial colocation site – or virtualizing data, applications, and network access requirements has gained the attention of CFOs and CEOs, requiring IT managers to more explicitly justify the cost of building internal infrastructure vs. outsourcing.  This is quickly becoming a very difficult task.

Money spent on data center infrastructure is lost to the organization.  The costs of labor, energy, space, and maintenance are high – money that could be better applied to product and service development, customer service capacity, or other revenue- and customer-facing activities.
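
As a back-of-the-envelope illustration of the build-versus-outsource question, the sketch below compares an amortized in-house build against a subscription model.  Every figure is a hypothetical placeholder; an organization would substitute its own quotes and internal cost data.

```python
# Back-of-the-envelope build vs. outsource comparison. All inputs are
# hypothetical placeholders; plug in real quotes and internal cost data.

def annual_inhouse_cost(capex: float, lifetime_years: float,
                        annual_opex: float) -> float:
    """Straight-line amortization of the build cost plus yearly operating cost."""
    return capex / lifetime_years + annual_opex

def annual_outsourced_cost(monthly_fee: float) -> float:
    return monthly_fee * 12

inhouse = annual_inhouse_cost(capex=2_000_000, lifetime_years=10,
                              annual_opex=400_000)  # power, staff, maintenance
outsourced = annual_outsourced_cost(monthly_fee=35_000)

print(f"In-house (amortized): ${inhouse:,.0f}/year")
print(f"Outsourced/cloud:     ${outsourced:,.0f}/year")
```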

The Bandwidth Factor

The one major limitation the IT community will need to overcome as data center consolidation continues and cloud services become the norm is bandwidth.  Applications such as streaming video, unified communications, and data-intensive applications will need more bandwidth.  The telecom companies are making progress, having deployed 100Gbps backbone capacity in many markets.  However, this capacity will need to continue growing quickly to meet the needs of organizations accessing data and applications stored or hosted within a virtual or cloud computing environment.

Consider a national government’s IT requirements.  The government, like most, is based within a metro area.  The agencies and departments consolidate their individual data centers and server closets into a central facility or a reduced number of facilities.  Government interoperability frameworks begin to make small steps toward allowing cross-agency data sharing, and individual users need access to a variety of applications and data sources to fulfill their decision support requirements.

Take, for example, a GIS (geospatial/geographic information system) with multiple demographic or other overlays.  Individual users will need to view data that may be drawn from several data sources, through GIS applications, rendering a large amount of complex information on individual display screens.  Without broadband access between both the user and the application, as well as between the application and its data sources, the result will be a very poor user experience.

Another example is using the capabilities of video conferencing, desktop sharing, and interactive persistent-state application sharing.  Without adequate bandwidth this is simply not possible.

Revisiting the “4th Utility” for 2013

The final vision on the 2013 “wishlist” is that we, as an IT industry, continue to acknowledge the need for developing the 4th Utility.  This is the idea that broadband communications, processing capacity (including SaaS applications), and storage are the right of all citizens.  Much like the first three utilities – roads, water, and electricity – the 4th Utility must be a basic part of all discussions related to national, state, or local infrastructure.  As we move further into the new millennium, Internet-enabled communications, or something very much like it, will be an essential part of all our lives.

The 4th Utility requires that high-capacity fiber optic infrastructure and broadband wireless be delivered to any location within the country which supports a community, or an individual connected to a community.  We will have to pay a fee to access the utility (the same as other utilities), but it is our right to have access, and our obligation to deliver it.

2013 will be a lot of fun for us in the IT industry.  Cloud computing is going to impact everybody – one way or the other.  Individual data centers will continue to close.  Service-oriented architectures, enterprise architecture, process modeling, and design efficiency will drive a lot of innovation.  We’ll lose some players, gain some players, and we’ll be in a better position at the end of 2013 than we are today.

Gartner Data Center Conference Looks Into Open Source Clouds and Data Backup

Day two of the Gartner Data Center Conference in Las Vegas continued reinforcing old topics, appearing at times designed either to enlist attendees in contributing to Gartner research, or simply to provide conference content directed at promoting conference sponsors.

For example, sessions “To the Point:  When Open Meets Cloud” and “Backup/Recovery: Backing Up the Future” included a series of audience surveys.  Those surveys were apparently the same as presented, in the same sessions, for several years.  Thus the speaker immediately referenced this year’s results vs. results from the same survey questions from the past two years.  This would lead a casual attendee to believe nothing radically new is being presented in the above topics, and the attendees are generally contributing to further trend analysis research that will eventually show up in a commercial Gartner Research Note.

Gartner analyst and speaker on the topic of “When Open Meets Cloud,” Aneel Lakhani, did make a couple of useful, if somewhat obvious, points in his presentation:

  • You cannot secure complete freedom from vendors, regardless of how much open source you adopt
  • Open source can actually be more expensive than commercial products
  • Interoperability is easy to say, but a heck of a lot more complicated to implement
  • Enterprise users have a very low threshold for “test” environments (sorry DevOps guys)
  • If your organization has the time and staff, test, test, and test a bit more to ensure your open source product will perform as expected or designed

However, analyst Dave Russell, speaker on the topic of “Backup/Recovery,” was a bit more cut-and-paste in his approach.  Lots of questions to match against last year’s conference, and a strong emphasis on using tape as a continuing, if not growing, medium for disaster recovery.

The problem with this presentation was that the discussion centered on backing up data – very little on business continuity.  In fact, in one slide he referenced a recovery point objective (RPO) of one day for backups.  What organization operating in a global market, in Internet time, can possibly design for a one-day RPO?

In addition, there was no discussion on the need for compatible hardware in a disaster recovery site that would allow immediate or rapid restart of applications.  Having data on tape is fine.  Having mainframe archival data is fine.  But without a business continuity capability, it is likely any organization will suffer significant damage in their ability to function in their marketplace.  Very few organizations today can absorb an extended global presence outage or marketplace outage.

The conference continues until Thursday, and we will look for more positive approaches to data center and cloud computing.

Gartner Data Center Conference Yields Few Surprises

Gartner’s 2012 Data Center Conference in Las Vegas has so far yielded no major surprises.  While attendance is strong (the stats are not available, but it is clear Gartner is having a very good conference), most of the sessions appear to simply reaffirm what everybody already knows, reinforcing the reality that data center consolidation, cloud computing, big data, and the move to an interoperable framework will be part of everybody’s life within a few years.

Gartner analyst Ray Paquet started the morning by drawing a line at the real value of server hardware in cloud computing.  Paquet stressed that cloud adopters should avoid integrated hardware solutions based on blade servers, which carry a high margin, and focus their CAPEX on cheaper “skinless” servers.  Paquet emphasized that integrated solutions are a “waste of money.”

Cameron Haight, another Gartner analyst, fired a volley at the process and framework world, with a comparison of the value DevOps brings versus ITIL.  Describing ITIL as a cumbersome burden to organizational agility, DevOps is a culture-changer that allows small groups to quickly respond to challenges.  Haight emphasized the frequently stressful relationship between development organizations and operations organizations, where operations demands stability and quality, and development needs freedom to move projects forward, sometimes without the comfort of baking code to the standards preferred by operations – and required by frameworks such as ITIL.

Haight’s most direct slide described DevOps as being “ITIL minus CRAP.”  Of course, most of his supporting slides for moving to DevOps looked eerily like an ITIL process…

Other sessions attended (by the author) included “Shaping Private Clouds,” a WIPRO product demonstration, and a data center introduction by Raging Wire.  All valuable introductions for those who are considering making a major change in their internal IT deployments, but nothing cutting edge or radical.

The Raging Wire data center discussion did raise some questions on the overall vulnerability of large-box data centers.  While it is certainly possible to build a data center up to any standard needed to fulfill a specific need, the large data center clusters in locations such as Northern Virginia are beginning to appear very vulnerable to natural, human, or equipment failure disruptions.  In addition to fulfilling data center tier classification models as presented by the Uptime Institute, it is clear we are producing critical national infrastructure which, if disrupted, could cause significant damage to the US economy or even social order.

Eventually, much like the communications infrastructure in the US, data centers will need to come under the observation or review of a national agency such as Homeland Security.  While nobody wants a government officer in the data center, protection of national infrastructure is a consideration we probably will not be able to avoid for long.

Raging Wire also noted that some colocation customers, particularly social media companies, are hitting up to 8kW per cabinet.  This is also a bit scary if true, particularly in extended deployments, as it could result in serious operational problems if cooling systems were disrupted; the heat generated in those cabinets would quickly become extreme.  It would also be interesting if Raging Wire and other colocation companies considered developing real-time CFD (computational fluid dynamics) monitoring for their data center floors, allowing better monitoring and predictability than simple zone monitoring solutions.
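
To put the 8kW-per-cabinet figure in perspective, here is a rough sketch converting cabinet power draw into heat load and the approximate airflow needed to remove it.  The 20°F temperature rise across the cabinet is an assumed design value; the constants come from the standard watts-to-BTU conversion and the sensible-heat approximation for air at sea level.

```python
# Rough heat-load and airflow estimate for a high-density cabinet.
# Nearly all electrical power delivered to IT gear is rejected as heat.
# Airflow uses the standard sensible-heat approximation for air at sea level
# (Q_btu_hr = 1.08 * CFM * delta_T_F); the 20 F rise is an assumed design value.

WATTS_TO_BTU_HR = 3.412

def cabinet_cooling(load_kw: float, delta_t_f: float = 20.0) -> tuple[float, float]:
    watts = load_kw * 1000
    btu_hr = watts * WATTS_TO_BTU_HR
    cfm = btu_hr / (1.08 * delta_t_f)
    return btu_hr, cfm

btu_hr, cfm = cabinet_cooling(8.0)   # the 8 kW cabinets mentioned above
print(f"Heat load: {btu_hr:,.0f} BTU/hr, airflow required: {cfm:,.0f} CFM")
```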

The best presentation of the day came at the end: “Big Data is Coming to Your Data Center.”  Gartner’s Sheila Childs brought color and enthusiasm to a topic many consider, well, boring.  Childs was able to bring the value, power, and future of big data into a human-consumable format that kept the audience in their seats until the end of the session at 6 p.m.

Childs hit on concepts such as “dark data” within organizations, the value of big data in decision support systems (DSS), and the need for developing and recruiting skilled staff who can actually write or build the systems needed to fully exploit the value of big data.  We cannot argue that point, and can only hope our education system is able to focus on producing graduates with the basic skills needed to fulfill that requirement.

5 Data Center Technology Predictions for 2012

2011 was a great year for technology innovation.  The science of data center design and operations continued to improve, the move away from mixed-use buildings used as data centers continued, the watts-per-square-foot metric took a back seat to overall kilowatts available to a facility or customer, and the idea of compute capacity and broadband as a utility began to take its place as a basic right of citizens.

However, there are 5 areas where we will see additional significant advances in 2012.

1.  Data Center Consolidation.  The US Government admits it is using only 27% of its overall available compute power.  With 2,094 data centers supporting the federal government (from the CIO’s 25 Point Implementation Plan to Reform Federal Information Technology Management), the government is required to close at least 800 of those data centers by 2015.

The lesson is not lost on state and local governments, private industry, or even Internet content providers.  The economics of operating a data center or server closet – whether the costs of real estate, power, and hardware, or service and licensing agreements – are compelling enough to make even the most fervent server-hugger reconsider their religion.

2.  Cloud Computing.  Who doesn’t believe cloud computing will eventually replace the need for server closets, cabinets, or even small cages in data centers?  The move to cloud computing is as certain as the move to email was in the 1980s.

Some IT managers and data owners hate the idea of cloud computing, enterprise service buses, and consolidated data.  It is not so much an issue of losing control as it is that, in many cases, consolidation brings transparency to their operations.  If you are the owner of data in a developing country, and suddenly everything you do can be audited by a central authority – well, it might make you uncomfortable…

A lesson learned while attending a fast pitch contest during late 2009 in Irvine, CA…  An enterprising entrepreneur gave his “pitch” to a panel of investment bankers and venture capital representatives.  He stated he was looking for a $5 million investment in his startup company.

A panelist asked what the money was for, and the entrepreneur stated “.. and $2 million to build out a data center…”  The panelist responded that 90% of new companies fail within 2 years.  Why would he want to be stuck with the liability of a data center and hardware if the company failed?  The panelist further stated, “don’t waste my money on a data center – do the smart thing, use the Amazon cloud.”

3.  Virtual Desktops and Hosted Office Automation.  How many times have we lost data and files due to a failed hard drive, stolen laptop, or virus disrupting our computer?  What is the cost or burden of keeping licenses updated, versions updated, and security patches current in an organization with potentially hundreds of users?  What is the lead time when a user needs a new application loaded on a computer?

From suites as simple as Google Docs, to Microsoft Office 365 and other desktop replacement application suites, users will become free from the burden of carrying a heavy laptop computer everywhere they travel.  Imagine being able to connect your 4G/LTE phone’s HDMI port to a hotel widescreen television and access all the applications normally used at a desktop.  You can give a presentation from your phone, update company documents, or perform nearly any other IT function, with the only limitation being the requirement for a broadband Internet connection (see #5 below).

Your phone can already connect to Google Docs and Microsoft Live Office, and the flexibility of access will only improve as iPads and other mobile devices mature.

The other obvious benefit is that files will be maintained on servers, where they are much more likely to be backed up and included in a disaster recovery plan.

4.  The Science of Data Centers.  It has only been a few years since small hosting companies were satisfied to go into a data center carved out of a mixed-use building, happy to have access to electricity, cooling, and a menu of available Internet network providers.  Most rooms were designed to accommodate 2~3 kW per cabinet, and users installed servers, switches, NAS boxes, and routers without regard to alignment or power usage.

That has changed.  No business or organization can survive without a 24x7x365 presence on the Internet, and most enterprises, small and large, are either consolidating their IT into professionally managed data centers, or have already washed their hands of servers and other IT infrastructure.

The Uptime Institute, BICSI, TIA, and government agencies have begun publishing guidelines on data center construction, providing best practices, quality standards, design standards, and even standards for evaluation.  Power efficiency metrics such as PUE and DCiE provide additional guidance on power management, data center management, and design.
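For reference, PUE (Power Usage Effectiveness) is the ratio of total facility power to IT equipment power, and DCiE (Data Center infrastructure Efficiency) is its reciprocal expressed as a percentage.  The short sketch below works through a hypothetical example; the sample power figures are made up purely for illustration.

```python
# Worked example of the PUE and DCiE efficiency metrics mentioned above.
#   PUE  = total facility power / IT equipment power   (1.0 is ideal; lower is better)
#   DCiE = IT equipment power / total facility power, as a percentage (DCiE = 1/PUE)
# The sample figures are hypothetical.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    return 100.0 * it_equipment_kw / total_facility_kw

if __name__ == "__main__":
    total_kw, it_kw = 1500.0, 1000.0  # e.g., 500 kW goes to cooling, UPS losses, and lighting
    print(f"PUE  = {pue(total_kw, it_kw):.2f}")    # PUE  = 1.50
    print(f"DCiE = {dcie(total_kw, it_kw):.1f}%")  # DCiE = 66.7%
```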

The days of small business technicians running into a data center at 2 a.m. to install new servers, repair broken servers, and pile their empty boxes or garbage in their cabinet or cage on the way out are gone.  The new data center religion is discipline, standards, and security.

Electricity is as valuable as platinum, and cooling and heat are managed more closely than inmates at San Quentin.  With nearly every standards organization now offering certification in cabling, data center design, and data center management, we can soon expect universities to offer an MS or Ph.D. in data center sciences.

5.  The 4th Utility Gains Traction.  Orwell’s “1984” painted a picture of pervasive government surveillance and incessant public mind control (Wikipedia).  Many people believe the Internet is the source of all evil, including identity theft, pornography, crime, the over-socialization of cultures and thoughts, and a huge intellectual time sink that pulls us into the need to be wired or connected 24 hours a day.

Yes, that is pretty much true, and if we do not weigh the 1,000 good things about the Internet against each single negative aspect, it might seem a pretty scary place for all future generations to be exposed to and indoctrinated in.  The alternative is to live in an intellectual Brazilian or Papuan rain forest, one step out of the evolutionary stone age.

The Internet is not going away, unless some global repressive government, fundamentalist religion, or dictator manages to dismantle civilization as we know it.

The 4th utility identifies broadband access to the ‘net as a basic right of all citizens, with the same status as roads, water, and electricity.  All governments with a desire to have their nation survive and thrive in the next millennium will find a way to cooperate with network infrastructure providers to build out their national information infrastructure (haven’t heard that term since Al Gore, eh?).

Without a robust 4th utility, our children and their children will become a global generation of intellectual migrant workers, intellectual refugees from a failed national information sciences vision and policy.

2012 should be a great year.  All the above predictions are positive, and if they prove true, they will leave the United States and other countries with stronger capacities to improve their national quality of life, and bring us all another step closer together.

Happy New Year!

Hunter Newby on Communications in America – Are We Competitive?

This is Part 1 in a series highlighting Hunter Newby’s thoughts and visions of communications in America.  Part 1 highlights Newby’s impressions of America’s competitiveness in the global telecom-enabled community.  Additional articles will touch on net neutrality, the “yin and yang” of the telecom industry, as well as the dilemma of supporting telecom “end points.”

Members and guests of the Internet Society gathered at the Sentry Center in New York on 14 June for the regional INET Conference.  The topic, “It’s your call. What kind of Internet do you want?” attracted Internet legends including Vint Cerf and Sir Timothy John “Tim” Berners-Lee, as well as a number of distinguished speakers and panelists representing a wide range of industry sectors.

Hunter Newby, Founder and CEO of Allied Fiber, joined the panel “Pushing Technology Boundaries” to discuss the future of Internet-enabled innovation.  The panel had robust discussions on many topics including net neutrality, infrastructure, telecom law, regulation, and the role of service providers.

Pacific-Tier Communications caught up with Newby on 22 June to learn more about his views on communications in America.

Are We Competitive?

Newby believes America lags behind other nations in developing the infrastructure needed to compete in a rapidly developing global community.  Much of the shortfall is related to physical telecommunications infrastructure needed to connect networks, people, content, and machines at the same level as other countries in Asia and Europe.

“The US lacks an appreciation for the need to understand physical (telecom) infrastructure,” said Newby.  He went on to describe the lack of standard terms in the US, such as “broadband communications.”  Newby continued, “In some locations, such as North Carolina, broadband communications are considered anything over 128 Kbps (kilobits per second).”

Newby notes there is considerable disinformation in the media related to the US communications infrastructure.  Although the US does have a national broadband plan, in reality the infrastructure is being built by companies whose priority is to meet the needs of shareholders.  Those priorities do not necessarily reflect the overall needs of the American people.

While some companies have made great progress bringing high performance telecom and Internet access to individual cities and towns, Newby is quick to remind us that “we cannot solve telecom problems in a single city or location, and (use that success) to declare victory as a country.”  Without a national high performance broadband and network infrastructure, the US will find it difficult to continue attracting the best talent to our research labs and companies, eroding our competitiveness not only in communications, but also as a country and economy.

Newby returns to a recurring theme in his discussions on communications: there are no connectivity “clouds,” as commonly drawn in presentations and documents, filling the space between end points on the Internet (an end point being a user, server, application, etc.).  The connectivity between end points happens on physical “patch panels,” telecom switches, and routers.  It happens in the street, at the data center, carrier hotel, central office, or exchange point.

Bringing it All Down to Layer 1 – Optical Fiber

Newby believes the basis of any discussion of communications infrastructure starts at the right of way.  Once access to a ground or aerial right of way (or easement) is secured, install fiber optic cable.  Lots of fiber optic cable.  Long haul fiber, metro fiber, and transoceanic submarine fiber.  Fiber optic cable allows tremendous amounts of information to travel from end point to end point, whether in a local area or across wide geographies.

Long distance and submarine fiber optic cables are essential in providing the infrastructure needed to move massive amounts of information and data throughout the US and the world.  While a large amount of communications traffic is still carried via satellite and microwave, only fiber optic cable has the capacity needed to move the data supporting communications across the network and Internet-enabled community.

Newby makes the point that in the US very few companies operate long haul fiber networks, and those companies control access to their communications infrastructure with tariffs based on location, distance, traffic volumes (bandwidth/ports), and types of traffic.  Much of the existing fiber optic infrastructure crossing the US is old and cannot support emerging transmission rates and technologies, limiting choices and competitiveness to a handful of companies – none of which provide fiber as a utility or as a neutral tariffed product.

As the cost of long distance or long haul fiber is extremely high, most carriers do not want to carry the expense of building their own new fiber optic infrastructure, and prefer to lease capacity from other carriers.  However, the carriers owning long haul fiber do not want to lease or sell their capacity to potentially competitive communications carriers.

Most US communications carriers operating their own long haul fiber optic networks also provide value-added services to their markets.  These might include voice services, cable or IP television, virtual private networks, and Internet access.  Thus the carrier is reluctant to lease capacity to other competitive or virtual carriers who may compete with them in individual or global markets.

Thus a dilemma – how do we build the American fiber backbone infrastructure to a level needed to provide a competitive, high capacity national infrastructure without aggressive investment in new fiber routes?

Newby has responded to the dilemma and challenge with his company Allied Fiber, and advises “the only way to properly build the physical infrastructure required to support all of this (infrastructure need) is to have a unique model at the fiber layer similar to what Allied (Allied Fiber) has, but not solely look at fiber as the only source of revenue.”

For example, Newby advises revenue can be supplemented by offering interconnecting carriers and other network or content providers space in facilities adjacent to the backbone fiber traditionally used for only in-line-amplifiers (ILAs) and fiber optic signal regeneration.  The ILA facility itself “could be an additional source of recurring revenue,” while allowing the fiber provider to remain a neutral utility.

Or in short, Newby explains “we need to put a 60 Hudson or One Wilshire every 60 miles” to allow unrestricted interconnection between carriers, networks, and content providers at a location closest to the infrastructure supporting end points.

The Backbone

America can compete, and break the long distance dilemma.  Newby is certain this is possible, and has a plan to bring the US infrastructure up to his highest standards.  The idea is really pretty simple.

  1. Build a high capacity fiber optic backbone passing through all major markets within the US.
  2. Connect the backbone to local metro fiber networks (reference the Dark Fiber Community)
  3. Connect the backbone to wireless networks and towers (and provide the access location)
  4. Connect the backbone to all major physical interconnection points, carrier hotels, and Internet Exchange Points (IXPs)
  5. Make access to the backbone available to all as a neutral, infrastructure utility

Newby strongly advises “If you do not understand the root of the issue, you are not solving the real problems.”

And the root of the issue is to ensure everybody in America has unrestricted access to unrestricted communications resources.


Hunter Newby, a 15-year veteran of the telecom networking industry, is the Founder and CEO of Allied Fiber.

Read other articles in this series, including:
