PTC 2015 Wraps Up with Strong Messages on SDNs and Automation

Software Defined Networking (SDN) and Network Function Virtualization (NFV) themes dominated workshops and side conversations throughout the PTC 2015 venue in Honolulu, Hawai’i this week.

Carrier SDNs, or more specifically provisioning automation platforms for service provider interconnections, have crept into nearly all marketing materials and elevator pitches in discussions with submarine cable operators, networks, Internet Exchange Points, and carrier hotels.

While some of the material may have included a bit of “SDN washing,” for the most part each operator and service provider engaging in the discussion understands and is scrambling to address the need for automated communications access, and is very serious in acknowledging a pending industry “paradigm shift” in service delivery models.

Presentations by companies such as Ciena and Riverbed showed a mature service delivery structure based on SDNs, while PacNet and Level 3 Communications (formerly TW Telecom) presented functional on-demand, self-service models of both service provisioning and a value-added marketplace.

Steve Alexander from Ciena explained some of the challenges the industry must address, such as development of cross-industry, SDN-enabled service delivery and provisioning standards.  In addition, as service providers move into service delivery automation, they must still be able to offer a discriminating or unique selling point by considering:

  • How to differentiate their service offering
  • How to differentiate their operations environment
  • How to ensure industry-acceptable delivery and provisioning time cycles
  • How to deal with legacy deployments

Alexander also emphasized that as an industry we need to get away from physical wiring whenever possible.  With 100Gbps ports, and the ability to create a software abstraction of individual circuits within the 100Gbps resource pool (as an example), a lot of virtual or logical provisioning can be accomplished without the need for dozens or hundreds of physical cross connections.
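
As a rough illustration of the kind of logical provisioning Alexander described, the hypothetical sketch below (Python, with invented class and method names that do not belong to any vendor’s SDN controller) carves software-defined circuits out of a single 100Gbps port’s capacity pool:

    # Minimal sketch of logical circuit provisioning from a shared 100Gbps port.
    # Names and behavior are illustrative only, not any vendor's actual API.

    class PortPool:
        def __init__(self, capacity_gbps=100):
            self.capacity_gbps = capacity_gbps
            self.circuits = {}  # circuit_id -> allocated Gbps

        def available(self):
            return self.capacity_gbps - sum(self.circuits.values())

        def provision(self, circuit_id, gbps):
            """Allocate a logical circuit if capacity remains; no patch panel required."""
            if gbps > self.available():
                raise ValueError(f"only {self.available()} Gbps free")
            self.circuits[circuit_id] = gbps

        def release(self, circuit_id):
            self.circuits.pop(circuit_id, None)

    pool = PortPool()
    pool.provision("cust-A-to-IX", 10)
    pool.provision("cust-B-to-cloud", 40)
    print(pool.available())  # 50 Gbps still unallocated on the same physical port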

The result of this effort should be an environment supporting on-demand provisioning within both a single service provider and a broader community marketplace such as a carrier hotel or large telecom interconnection facility (e.g., The Westin Building, 60 Hudson, One Wilshire).  Some examples of actual and required deployments included:

  • A bandwidth on-demand marketplace
  • Data center interconnections, including within data center operators which have multiple interconnected meet-me-points spread across a geographic area
  • Interconnection to other services within the marketplace such as cloud service providers (e.g., Amazon Direct Connect, Azure, SoftLayer, etc.), content delivery networks, SaaS, and disaster recovery capacity and services

Robust discussions on standards also spawned debate.  With SDNs, much like any other emerging use of technologies or business models, there are both competing and complementary standards.  Even terms such as Network Function Virtualization (NFV), while useful, do not yet have much depth within standard taxonomies or definitions.

During the PTC 2015 session entitled “Advanced Capabilities in the Control Plane Leveraging SDN and NFV Toward Intelligent Networks,” a long list of current standards and products supporting the concept of SDNs was presented, including:

  • OpenContrail
  • OpenDaylight
  • OpenStack
  • OpenFlow
  • OPNFV
  • ONOS
  • Open vSwitch (OVS)
  • Project Floodlight
  • Open Networking
  • and on and on….

For consumers and small network operators this is a very good development, and it will certainly usher in a new era of on-demand self-service capacity provisioning, elastic provisioning (short-term service contracts, even down to the minute or hour), carrier hotel-based bandwidth and service marketplaces, and variable usage metering and costs, allowing much better use of OPEX budgets.
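
To make the metering and elastic contract idea concrete, here is a small, purely hypothetical usage-billing calculation; the rate is invented for illustration and does not reflect any provider’s pricing:

    # Hypothetical per-hour metered billing for an elastic circuit.
    # The rate below is an assumption, not real pricing.

    HOURLY_RATE_PER_GBPS = 0.50  # assumed spot price, USD per Gbps-hour

    def metered_cost(gbps, hours):
        """Cost of holding an elastic circuit for a short-term contract."""
        return gbps * hours * HOURLY_RATE_PER_GBPS

    # A 10Gbps circuit held for a 6-hour event vs. an always-on month
    print(metered_cost(10, 6))        # 30.0   -> pay only for the event window
    print(metered_cost(10, 24 * 30))  # 3600.0 -> the fixed-commitment alternative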

For service providers (according to discussions with several North Asian telecom carriers), it is not quite as attractive, as they generally prefer long-term, fixed contracts or wholesale capacity sales.

The connection and integration of cloud services with telecom or network services is quite clear.  At some point provisioning of both telecom and compute/storage/application services will be through a single interface, on-demand, elastic (use only what you need and for only as long as you need it), usage-based (metered), and favor the end user.

While most operators get the message and are in the process of developing and deploying their first-iteration solutions, others simply still have a bit of homework to do.  In the words of one CEO from a very large international data center company, “we really need to have a strategy to deal with this multi-cloud, hybrid cloud, or whatever you call it thing.”

Oh my…

Nurturing the Marriage of Cloud Computing and SOAs

In 2009 we began consulting jobs with governments in developing countries with the primary objective of consolidating data centers across government ministries and agencies into centralized, high-capacity, high-quality data centers.  At the time, nearly all individual ministry or agency data infrastructure was built into either small computer rooms or server closets with some added “brute force” air conditioning, no backup generators, no data backup, superficial security, and lots of other ailments.

The vision and strategy was that if we consolidated inefficient, end-of-life, and high-risk IT infrastructure into a standardized and professionally managed facility, national information infrastructure would not only be more secure, but through standardization, volume purchasing agreements, some server virtualization, and development of broadband infrastructure, most of the IT needs of government would be easily fulfilled.

Then of course cloud computing began to mature, and the underlying technologies of Infrastructure as a Service (IaaS) became feasible.  Now, not only were the governments able to decommission inefficient and high-risk IT environments, they would also be able to build virtual data centers with on-demand levels of compute, storage, and network resources.  Basic data center replacement.

Even those remaining committed “server hugger” IT managers and fiercely independent governmental organizations could hardly argue the benefits of having access to disaster recovery storage capacity through the centralized data center.

As the years passed, and we entered 2014, not only did cloud computing mature as a business model, but senior management began to increase their awareness of various aspects of cloud computing, including the financial benefits, standardization of IT resources, the characteristics of cloud computing, and potential for Platform and Software as a Service (PaaS/SaaS) to improve both business agility and internal decision support systems.

At the same time, information and organizational architecture, governance, and service delivery frameworks such as TOGAF, COBIT, and ITIL, along with Risk Analysis training, reinforced the value of both data and information within an organization, and the need for IT systems to support higher-level architectures for decision support systems and market interactions (including Government to Government, Business, and Citizens for the public sector).

2015 will bring cloud computing and architecture together at levels just becoming comprehensible to much of the business and IT world.  The Open Group has made a good first stab at building a standard for this marriage with its Service-Oriented Cloud Computing Infrastructure (SOCCI) framework. According to the SOCCI standard,

“Infrastructure is a foundational element for enterprise architecture. Infrastructure has been  traditionally provisioned in a physical manner. With the evolution of virtualization technologies  and application of service-orientation to infrastructure, it can now be offered as a service.

Service-orientation principles originated in the business and application architecture arena. After  repeated, successful application of these principles to application architecture, IT has evolved to  extending these principles to the infrastructure.”

At first glance the SOCCI standard appears to be a document which creates a mapping between enterprise architecture (TOGAF) and cloud computing.  At second glance the SOCCI standard really steps towards tightening the loose coupling of standard service-oriented architectures through use of cloud computing tools included with all service models (IaaS/PaaS/SaaS).

The result is an architectural vision which is easily capable of absorbing existing IT requirements, as well as incorporating emerging big data analytics models, interoperability, and enterprise architecture.

Since the early days of 2009, discussion topics with government and enterprise customers have shown a marked transition from simply justifying decommissioning of high-risk data centers to how to manage data sharing, interoperability, or the potential for over-standardization and other service delivery barriers which might inhibit innovation – or the ability of business units to quickly respond to rapidly changing market opportunities.

2015 will be an exciting year for information and communications technologies.  For those of us in the consulting and training business, the new year is already shaping up to be the busiest we have seen.

It is Time to Get Serious about Architecting ICT

Just finished another ICT-related technical assistance visit with a developing country government. Even in mid-2014, I spend a large amount of time teaching basic principles of enterprise architecture, and the need for adding form and structure to ICT strategies.

Service-oriented architectures (SOA) have been around for quite a long time, with some references going back to the 1980s. ITIL, COBIT, TOGAF, and other ICT standards or recommendations have been around for quite a long time as well, with training and certifications part of nearly every professional development program.

So why is the idea of architecting ICT infrastructure still an abstraction to so many in government and even private industry? It cannot be the lack of training opportunities or publicly available reference materials. It cannot be the lack of technology, or the lack of consultants readily willing to assist in deploying EA, SOA, or interoperability within any organization or industry cluster.

During the past two years we have run several Interoperability Readiness Assessments within governments. The assessment initially takes the form of a survey, and is distributed to a sample of 100 or more participants, with positions ranging from administrative task-based workers, to Cxx or senior leaders within ministries and government agencies.

Questions range from basic ICT knowledge to data sharing, security, and decision support systems.

While the idea of information silos is well-documented and understood, it is still quite surprising to see “siloed” attitudes are still prevalent in modern organizations.  Take the following question:

Question on Information Sharing

This question did not refer to sharing data outside of the government, but rather within the government.  The responses indicate a significant lack of trust when interacting with other government agencies, which will of course prevent any chance of developing a SOA or facilitating information sharing among agencies.  The end result is a lower level of both integrity and value in national decision support capability.

The Impact of Technology and Standardization

Most governments are considering or implementing data center consolidation initiatives.  There are several good reasons for this, including:

  • Cost of real estate, power, staffing, maintenance, and support systems
  • Transition from CAPEX-based ICT infrastructure to OPEX-based
  • Potential for virtualization of server and storage resources
  • Standardized cloud computing resources

While all those justifications for data center consolidation are valid, their value potentially pales in comparison to the potential of more intelligent use of data across organizations, and even externally with outside agencies.  On this point, one senior government official stated:

“Government staff are not necessarily the most technically proficient.  This results in reliance on vendors for support, thought leadership, and in some cases contractual commitments.  Formal project management training and certification are typically not part of the capacity building of government employees.

Scientific approaches to project management, especially ones that lend themselves to institutionalization and adoption across different agencies will ensure a more time-bound and intelligent implementation of projects. Subsequently, overall knowledge and technical capabilities are low in government departments and agencies, and when employees do gain technical proficiency they will leave to join private industry.”

There is also an issue with a variety of international organizations going into developing countries or developing economies, and offering no or low cost single-use ICT infrastructure, such as for health-related agencies, which are not compatible with any other government owned or operated applications or data sets.

And of course the more this occurs, the more difficult it is for government organizations to enable interoperability or data sharing, and thus the idea of an architecture or data sharing becomes either impossible or extremely difficult to implement or accomplish.

The Road to EA, SOAs, and Decision Support

There are several actions to take on the road to meeting our ICT objectives.

  1. Include EA, service delivery (ITIL), governance (COBIT), and SOA training in all university and professional ICT education programs.  It is not all about writing code or configuring switches, we need to ensure a holistic understanding of ICT value in all ICT education, producing a higher level of qualified graduates entering the work force.
  2. Ensure government and private organizations develop or adopt standards or regulations which drive enterprise architecture, information exchange models, and SOAs as a basic requirement of ICT planning and operations.
  3. Ensure executive awareness and support, preferably through a formal position such as the Chief Information Officer (CIO).  Principles developed and published via the CIO must be adopted and governed by all organizations.

Nobody expects large organizations, in particular government organizations, to change their cultures of information independence overnight.  This is a long-term evolution as the world continues to better understand the value and extent of value within existing data sets, and begins creating new categories of data.  Big data, data analytics, and exploitation of both structured and unstructured data will empower those who are prepared, and leave those who are not prepared far behind.

For a government, not having the ability to access, identify, share, analyze, and address data created across agencies will inhibit effective decision support, with potential impact on disaster response, security, economic growth, and overall national quality of life.

If there is a call to action in this message, it is for governments to take a close look at how their national ICT policies, strategies, human capacity, and operations are meeting national objectives.  Prioritizing use of EA and supporting frameworks or standards will provide better guidance across government, and all steps taken within the framework will add value to the overall ICT capability.

Pacific-Tier Communications LLC provides consulting to governments and commercial organizations on topics related to data center consolidation, enterprise architecture, risk management, and cloud computing.

What Value Can I Expect from Cloud Computing Training?

Normally, when we think of technical-related training, images of rooms loaded with switches, routers, and servers might come to mind.  Cloud computing is different.  In reality, cloud computing is not a technology, but rather a framework employing a variety of technologies – most notably virtualization – to solve business problems or enable opportunities.

From our own practice, the majority of cloud training students represent non-technical careers and positions. Our training follows the CompTIA Cloud Essentials course criteria, and is not a technical course, so the non-technical student trend should not come as any big surprise.

What does come as a surprise is how enthusiastically our students dig into the topic.  Whether business unit managers, accounting and finance, sales staff, or executives, all students come into class convinced they need to know about cloud computing as an essential part of their future career progression, or even at times to ensure their career survival.

Our local training methodology is based on establishing an in-depth knowledge of the NIST Cloud Definitions and Cloud Reference Architecture.  Once the students get beyond the perception that such documents are too complex, and accept that we will refer nearly all aspects of training back to both documents, we easily establish the core cloud computing knowledge base needed to explore both technical aspects and, more importantly, practical aspects of how cloud computing is used in our daily lives, and likely future lives.
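
For readers who have not seen it, the skeleton of the NIST definition (SP 800-145) is compact enough to fit in a few lines; the sketch below simply expresses that taxonomy as a Python data structure for quick reference:

    # The NIST SP 800-145 cloud computing definition, expressed as plain data.

    NIST_CLOUD_DEFINITION = {
        "essential_characteristics": [
            "on-demand self-service",
            "broad network access",
            "resource pooling",
            "rapid elasticity",
            "measured service",
        ],
        "service_models": ["SaaS", "PaaS", "IaaS"],
        "deployment_models": ["private", "community", "public", "hybrid"],
    }

    for section, items in NIST_CLOUD_DEFINITION.items():
        print(f"{section}: {', '.join(items)}")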

This is not significantly different than when we trained business users on how to use, employ, and exploit  the Internet in the 90s.  Those of us in engineering or technical operations roles viewed this type of training with either amusement or contempt, at times mocking those who did not share our knowledge and experience of internetworking, and ability to navigate the Internet universe.

We are in the same phase of absorbing and developing tacit knowledge of compute and storage access on demand, service-oriented architectures, Software as a Service, and the move to a subscription-based application world.

Those students who attend cloud computing training leave the class better able to engage in decision-making related to both personal and organizational information and communication technology, and less exposed to the spectrum of cloud washing, or marketing use of “cloud” and “XXX as a Service” language overwhelming nearly all media on subjects ranging from hamster food to SpaceX and hyperloops.

Even the hardest-core engineers who have degraded themselves to join a non-technical, business-oriented cloud course walk away with a better view of how their tools support organizational agility (good jargon, no?), in addition to the potential financial impacts, reduced application development cycles, disaster recovery, business continuity, and all the other potential benefits to the organization when adopting cloud computing.

Some even walk away from the course planning a breakup with some of their favorite physical servers.

The Bottom Line

No student has walked away from a cloud computing course knowing less about the role, impact, and potential of implementing cloud in nearly any organization.  While the first few hours of class embrace a lot of great debates on the value of cloud computing, by the end of the course most students agree they are better prepared to consider, envision, evaluate, and address the potential or shortfalls of cloud computing.

Cloud computing has, and will continue to have, influence on many aspects of our lives. It is not going away anytime soon.  The more we can learn, either through self-study or resident training, the better position we’ll be in to make intelligent decisions regarding the use and value of cloud in our lives and organizations.

Connecting at the Westin Building Exchange in Seattle

International telecommunication carriers all share one thing in common – the need to connect with other carriers and networks.  We want to make calls to China, hold a video conference in Moldova, or send an email message for delivery within 5 seconds to Australia – all possible with our current state of global communications.  Magic?  Of course not.  While an abstraction to most, the reality is telecommunications physical infrastructure extends to nearly every corner of the world, and communications carriers bring this global infrastructure together at a small number of facilities strategically placed around the world, informally called “carrier hotels.”

Pacific-Tier had the opportunity to visit the Westin Building Exchange (commonly known as the WBX), one of the world’s busiest carrier hotels, in early August.   Located in the heart of Seattle’s bustling business district, the WBX stands tall at 34 stories.  The building also acts as a crossroads of the Northwest US long distance terrestrial cable infrastructure, and is adjacent to trans-Pacific submarine cable landing points.

The world’s telecommunications community needs carrier hotels to interconnect their physical and value-added networks, and the WBX is doing a great job of facilitating both types of interconnection among its more than 150 carrier tenants.

“We understand the needs of our carrier and network tenants,” explained Mike Rushing, Business Development Manager at the Westin Building.  “In the Internet economy things happen at the speed of light.  Carriers at the WBX are under constant pressure to deliver services to their customers, and we simply want to make this part of the process (facilitating interconnections) as easy as possible for them.”

The WBX community is not limited to carriers.  The community has evolved to support Internet Service Providers, Content Delivery Networks (CDNs), cloud computing companies, academic and research networks, enterprise customers, public colocation and data center operators, the NorthWest GigaPOP, and even the Seattle Internet Exchange (SIX), one of the largest Internet exchanges in the world.

“Westin is a large community system,” continued Rushing.  “As new carriers establish a point of presence within the building, and begin connecting to others within the tenant and accessible community, then the value of the WBX community just continues to grow.”

The core of the WBX is the 19th-floor meet-me-room (MMR).  The MMR is a large, neutral interconnection point for networks and carriers representing both US and international companies.  For example, if China Telecom needs to connect a customer’s headquarters in Beijing to an office in Boise served by AT&T, the actual circuit must transfer at a physical demarcation point from China Telecom to AT&T.  There is a good chance that physical connection will occur at the WBX.
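
As a simplified model of what happens in the MMR, the toy sketch below records cross-connects between tenant carriers; the data structure is invented for illustration and has nothing to do with WBX’s actual inventory systems:

    # Toy model of a meet-me-room cross-connect registry (illustrative only).

    cross_connects = []  # each entry: (a_side carrier, z_side carrier, circuit id)

    def order_cross_connect(a_side, z_side, circuit_id):
        """Record a physical demarcation between two tenant carriers in the MMR."""
        cross_connects.append((a_side, z_side, circuit_id))

    order_cross_connect("China Telecom", "AT&T", "BEIJING-BOISE-001")

    # Every carrier AT&T currently hands circuits to at this facility
    peers = {a for a, z, _ in cross_connects if z == "AT&T"}
    print(peers)  # {'China Telecom'}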

According to Kyle Peters, General Manager of the Westin Building, “we are supporting a wide range of international and US communications providers and carriers.  We fully understand the role our facility plays in supporting not only our customer’s business requirements, but also the role we play in supporting global communications infrastructure.”

You would be correct in assuming the WBX plays an important role in that critical US and global communications infrastructure.  Thus you would further expect the WBX to be constructed and operated in a manner providing a high level of confidence to the community their installed systems will not fail.

Lance Forgey, Director of Operations at the WBX, manages not only the MMR, but also the massive mechanical (air conditioning) and electrical distribution systems within the building.  A former submarine engineer, Forgey runs the Westin Building much like he operated critical systems within Navy ships.  Assisted by an experienced team of former US Navy engineers and US Marines, the facility presents an image of security, order, cleanliness, and operational attention to detail.

“Our operations and facility staff bring the discipline of many years in the military, adding the innovation needed to keep up with our customers’ industries,” said Forgey.  “Once you have developed a culture of no compromise on quality, then it is easy to keep things running.”

That is very apparent when you walk through the site – everything is in its place, it is remarkably clean, and it is very obvious the entire site is the product of a well-prepared plan.

One area which stands out at the WBX is the cooling and electrical distribution infrastructure.  With space available in adjacent external parking structures and additional areas outside of the building, most heavy equipment is located outside, providing an additional layer of physical security and allowing the WBX to recover as much space within the building as possible for customer use.

“Power is not an issue for us,” noted Forgey.  “It is a limiting factor for much of our industry, however at the Westin Building we have plenty, and can add additional power anytime the need arises.”

That is another attraction of the WBX versus some of the other carrier hotels on the West Coast of the US.  Power in Washington State averages around $0.04/kWh, while power in California may be nearly three times as expensive.

“In addition to having all the interconnection benefits similar operations have on the West Coast, the WBX can also significantly lower operating costs for tenants,” added Rushing.  As the cost of power is a major factor in data center operations, reducing operating costs through a significant reduction in the cost of power is a big deal.
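
A quick back-of-the-envelope calculation shows why the power rate matters; the cabinet load below is an assumption chosen only to illustrate the arithmetic:

    # Back-of-the-envelope monthly power cost for one cabinet (illustrative only).

    CABINET_KW = 8            # assumed high-density cabinet load
    HOURS_PER_MONTH = 24 * 30

    def monthly_power_cost(rate_per_kwh):
        return CABINET_KW * HOURS_PER_MONTH * rate_per_kwh

    print(monthly_power_cost(0.04))  # ~$230/month at Washington State rates
    print(monthly_power_cost(0.12))  # ~$691/month at roughly three times that rate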

The final area carrier hotels need to address is the ever changing nature of communications, including interconnections between members of the WBX community.  Nothing is static, and the WBX team is constantly communicating with tenants, evaluating changes in supporting technologies, and looking for ways to ensure they have the tools available to meet their rapidly changing environments.

Cloud computing, software-defined networking, carrier Ethernet – all topics which require frequent communication with tenants to gain insight into their visions, concerns, and plans.  The WBX staff showed great interest in cooperating with their tenants to ensure the WBX will not impede development or implementation of new technologies, and in attempting to stay ahead of their customers’ deployments.

“If a customer comes to us and tells us they need a new support infrastructure or framework with very little lead time, then we may not be able to respond quickly enough to meet their requirements” concluded Rushing.  “Much better to keep an open dialog with customers and become part of their team.”

Pacific-Tier has visited, and evaluated dozens of data centers during the past four years.  Some have been very good, some have been very bad.  Some have gone over the edge in data center deployments, chasing the “grail” of a Tier IV data center certification, while some have been little more than a server closet.

The Westin Building / WBX is unique in the industry.  Owned by both Clise Properties of Seattle and Digital Realty Trust, the Westin Building brings the best of both the real estate world and data centers into a single operation.  The quality of mechanical and electrical infrastructure, the people maintaining the infrastructure, and the vision of the company give a visitor the impression that not only is the WBX a world-class facility, but also that all staff and management know their business, enjoy the business, and put their customers on top as their highest priority.

As Clise Properties owns much of the surrounding land, the WBX has plenty of opportunity to grow as the business expands and changes.  “We know cloud computing companies will need to locate close to the interconnection points, so we better be prepared to deliver additional high-density infrastructure as their needs arise” said Peters.  And in fact Clise has already started planning for their second colocation building.  This building, like its predecessor, will be fully interconnected with the Westin Building, including virtualizing the MMR distribution frames in each building into a single cross interconnection environment.

WBX offers the global telecom industry an alternative to other carrier hotels in Los Angeles and San Francisco. One shortfall in the global telecom industry is the “single threaded” links many have with other carriers in the global community.  California has the majority of North America / Asia carrier interconnections today, but all note California is one of the world’s higher-risk locations for building critical infrastructure, with the reality it is more a matter of “when” than “if” a catastrophic event such as an earthquake occurs which could seriously disrupt international communications passing through one of the region’s MMRs.

The telecom industry needs to have the option of alternate paths of communications and interconnection points.  While the WBX stands tall on its own as a carrier hotel and interconnection site, it is also the best alternative and diverse landing point for trans-Pacific submarine cable capacity – and subsequent interconnections.

The WBX offers a wide range of customer services, including:

  • Engineering support
  • 24×7 Remote hands
  • Fast turnaround for interconnections
  • Colocation
  • Power circuit monitoring and management
  • Private suites and lease space for larger companies
  • 24×7 security monitoring and access control

Check out the Westin Building and WBX the next time you are in Seattle, or if you want to learn more about the telecom community revolving and evolving in the Seattle area.  Contact Mike Rushing at mrushing@westinbldg.com for more information.

 

Data Center Consolidation and Adopting Cloud Computing in 2013

Throughout 2012, large organizations and governments around the world continued to struggle with the idea of consolidating inefficient data centers, server closets, and individual “rogue” servers scattered around their enterprises or government agencies.  Issues included the cost of operating data centers, disaster management of information technology resources, and of course human factors centered on control, power, or retention of jobs in a rapidly evolving IT industry.

Cloud computing and virtualization continue to have an impact on all consolidation discussions, not only from the standpoint of providing a much better model for managing physical assets, but also in the potential cloud offers to solve disaster recovery shortfalls, improve standardization, and encourage or enable development of service-oriented architectures.

Our involvement in projects ranging from local, state, and national government levels in both the United States and other countries indicates a consistent need for answering the following concerns:

  • Existing IT infrastructure, including both IT and facility, is reaching the end of its operational life
  • Collaboration requirements between internal and external users are expanding quickly, driving an architectural need for interoperability
  • Decision support systems require access to both raw data, and “big data/archival data”

We would like to see an effort within the IT community to move in the following directions:

  1. Real effort at decommissioning and eliminating inefficient data centers
  2. All data and applications should fit into an enterprise architecture framework – regardless of the size of the organization or data
  3. Aggressive development of standards supporting interoperability, portability, and reuse of objects and data

Regardless of the very public failures experienced by cloud service providers over the past year, the reality is cloud computing as an IT architecture and model is gaining traction, and is not likely to go away any time soon.  As with any emerging service or technology, cloud services will continue to develop and mature, reducing the impact and frequency of failures.

Why would an organization continue to buy individual high-powered workstations, individual software licenses, and device-bound storage when the same application can be delivered to a simple display, or a wide variety of displays, with standardized web-enabled cloud (SaaS) applications that store mission-critical data images on a secure storage system at a secure site?  Why not facilitate the transition from CAPEX to OPEX, license to subscription, infrastructure to product and service development?

In reality, unless an organization is in the hardware or software development business, there is very little technical justification for building and managing a data center.  This includes secure facilities supporting military or other sensitive sites.

The cost of building and maintaining a data center, compared with either outsourcing into a commercial colocation site – or virtualizing data, applications, and network access requirements has gained the attention of CFOs and CEOs, requiring IT managers to more explicitly justify the cost of building internal infrastructure vs. outsourcing.  This is quickly becoming a very difficult task.

Money spent on a data center infrastructure is lost to the organization.  The cost of labor is high, and the cost of energy, space, and maintenance is high – money that could be better applied to product and service development, customer service capacity, or other revenue and customer-facing activities.

The Bandwidth Factor

The one major limitation the IT community will need to overcome as data center consolidation continues and cloud services become the norm is bandwidth.  Applications such as streaming video, unified communications, and data-intensive applications will need more bandwidth.  The telecom companies are making progress, having deployed 100Gbps backbone capacity in many markets.  However this capacity will need to continue growing quickly to meet the needs of organizations accessing data and applications stored or hosted within a virtual or cloud computing environment.

Consider a national government’s IT requirements.  The government, like most, is based within a metro area.  The agencies and departments consolidate their individual data centers and server closets into a central or reduced number of facilities.  Government interoperability frameworks begin to make small steps allowing cross-agency data sharing, and individual users need access to a variety of applications and data sources to fulfill their decision support requirements.

For example, consider a GIS (Geospatial/Geographic Information System) with multiple demographic or other overlays.  Individual users will need to display data that may be drawn from several data sources, through GIS applications, and render a large amount of complex data on individual display screens.  Without broadband access between both the user and application, as well as between the application and data sources, the result will be a very poor user experience.

Another example is using the capabilities of video conferencing, desktop sharing, and interactive persistent-state application sharing.  Without adequate bandwidth this is simply not possible.
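
A rough sizing exercise illustrates the point; the per-user figures below are assumptions made for the sake of the example, not measured values:

    # Rough aggregate bandwidth estimate for consolidated government users.
    # Per-session rates are assumed for illustration only.

    GIS_SESSION_MBPS = 20  # assumed heavy GIS session with multiple overlays
    VIDEO_CONF_MBPS = 4    # assumed HD video conference stream

    def aggregate_demand_gbps(gis_users, video_users):
        """Peak demand if all sessions run concurrently."""
        total_mbps = gis_users * GIS_SESSION_MBPS + video_users * VIDEO_CONF_MBPS
        return total_mbps / 1000

    print(aggregate_demand_gbps(gis_users=500, video_users=2000))  # 18.0 Gbps of access capacity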

Revisiting the “4th Utility” for 2013

The final vision on the 2013 “wish list” is that we, as an IT industry, continue to acknowledge the need for developing the 4th Utility.  This is the idea that broadband communications, processing capacity (including SaaS applications), and storage are the right of all citizens.  Much like the first three utilities – roads, water, and electricity – the 4th Utility must be a basic part of all discussions related to national, state, or local infrastructure.  As we move further into the new millennium, Internet-enabled, or something like Internet-enabled, communications will be an essential part of all our lives.

The 4th Utility requires that high-capacity fiber optic infrastructure and broadband wireless be delivered to any location within the country which supports a community or an individual connected to a community.  We’ll have to pay a fee to access the utility (same as other utilities), but it is our right and obligation to deliver the utility.

2013 will be a lot of fun for us in the IT industry.  Cloud computing is going to impact everybody – one way or the other.  Individual data centers will continue to close.  Service-oriented architectures, enterprise architecture, process modeling, and design efficiency will drive a lot of innovation.  We’ll lose some players, gain players, and we’ll be in a better position at the end of 2013 than today.

Gartner Data Center Conference Yields Few Surprises

Gartner’s 2012 Data Center Conference in Las Vegas is noted for not yielding any major surprises.  While drawing an uncanny number of attendees (the stats are not available, however it is clear they are having a very good conference), most of the sessions appear to be simply reaffirming what everybody really knows already, serving to reinforce the reality that data center consolidation, cloud computing, big data, and the move to an interoperable framework will be part of everybody’s life within a few years.

Gartner analyst Ray Paquet started the morning by drawing a line at the real value of server hardware in cloud computing.  Paquet stressed that cloud adopters should avoid integrated hardware solutions based on blade servers, which carry a high margin, and focus their CAPEX on cheaper “skinless” servers.  Paquet emphasized that integrated solutions are a “waste of money.”

Cameron Haight, another Gartner analyst, fired a volley at the process and framework world with a comparison of the value DevOps brings versus ITIL.  He described ITIL as a cumbersome burden to organizational agility, while DevOps is a culture-changer that allows small groups to quickly respond to challenges.  Haight emphasized the frequently stressful relationship between development organizations and operations organizations, where operations demands stability and quality, and development needs freedom to move projects forward, sometimes without the comfort of baking code to the standards preferred by operations – and required by frameworks such as ITIL.

Haight’s most direct slide described DevOps as being “ITIL minus CRAP.”  Of course most of his supporting slides for moving to DevOps looked eerily like an ITIL process….

Other sessions attended (by the author) included “Shaping Private Clouds,” a WIPRO product demonstration, and a data center introduction by Raging Wire.  All valuable introductions for those who are considering making a major change in their internal IT deployments, but nothing cutting edge or radical.

The Raging Wire data center discussion did raise some questions on the overall vulnerability of large box data centers.  While it is certainly possible to build a data center up to any standard needed to fulfill a specific need, the large data center clusters in locations such as Northern Virginia are beginning to appear very vulnerable to either natural, human, or equipment failure disruptions.  In addition to fulfilling data center tier classification models as presented by the Uptime Institute, it is clear we are producing critical national infrastructure which if disrupted could cause significant damage to the US economy or even social order.

Eventually, much like the communications infrastructure in the US, data centers will need to come under the observation or review of a national agency such as Homeland Security.  While nobody wants a government officer in the data center, protection of national infrastructure is a consideration we probably will not be able to avoid for long.

Raging Wire also noted that some colocation customers, particularly social media companies, are hitting up to 8kW per cabinet.  That is a bit scary if true, especially in extended deployments, as it could result in serious operational problems if cooling systems were disrupted – the heat generated in those cabinets will quickly become extreme.  It would also be interesting if companies like Raging Wire and other colocation companies considered developing a real-time CFD monitor for their data center floors, allowing better monitoring and predictability than simple zone monitoring solutions.

The best presentation of the day came at the end, “Big Data is Coming to Your Data Center.”  Gartner’s Sheila Childs brought color and enthusiasm to a topic many consider, well, boring.  Childs was able to bring the value, power, and future of big data into a human-consumable format that kept the audience in their seats until the end of the session at 6 p.m.

Childs hit on concepts such as “dark data” within organizations, the value of big data in decision support systems (DSS), and the need for developing and recruiting skilled staff who can actually write or build the systems needed to fully exploit the value of big data.  We cannot argue that point, and can only hope our education system is able to focus on producing graduates with the basic skills needed to fulfill that requirement.

5 Data Center Technology Predictions for 2012

2011 was a great year for technology innovation.  The science of data center design and operations continued to improve, the move away from mixed-use buildings used as data centers continued, the watts/sqft metric took a back seat to overall kilowatts available to a facility or customer, and the idea of compute capacity and broadband as a utility began to take its place as a basic right of citizens.

However, there are 5 areas where we will see additional significant advances in 2012.

1.  Data Center Consolidation.  The US Government admits it is using only 27% of its overall available compute power.  With 2,094 data centers supporting the federal government (from the CIO’s 25 Point Plan to Reform Federal IT Management), the government is required to close at least 800 of those data centers by 2015.

The lesson is not lost on state and local governments, private industry, or even Internet content providers.  The economics of operating a data center or server closet, whether in costs of real estate, power, or hardware, in addition to service and licensing agreements, are compelling enough to make even the most fervent server-hugger reconsider their religion.

2.  Cloud Computing.  Who doesn’t believe cloud computing will eventually replace the need for server closets, cabinets, or even small cages in data centers?  The move to cloud computing is as certain as the move to email was in the 1980s.

Some IT managers and data owners hate the idea of cloud computing, enterprise service busses, and consolidated data.  Not so much an issue of losing control, but in many cases because it brings transparency to their operation.  If you are the owner of data in a developing country, and suddenly everything you do can be audited by a central authority – well it might make you uncomfortable…

A lesson learned while attending a  fast pitch contest during late 2009 in Irvine, CA…  An enterprising entrepreneur gave his “pitch” to a panel of investment bankers and venture capital representatives.  He stated he was looking for a $5 million investment in his startup company. 

A panelist asked what the money was for, and the entrepreneur stated “.. and $2 million to build out a data center…”  The panelist responded that 90% of new companies fail within 2 years.  Why would he want to be stuck with the liability of a data center and hardware if the company failed? The gentleman further stated, “don’t waste my money on a data center – do the smart thing, use the Amazon cloud.”

3.  Virtual Desktops and Hosted Office Automation.  How many times have we lost data and files due to a failed hard drive, stolen laptop, or virus disrupting our computer?  What is the cost or burden of keeping licenses updated, versions updated, and security patches current in an organization with potentially hundreds of users?  What is the lead time when a user needs a new application loaded on a computer?

From applications as simple as Google Docs, to Microsoft 365, and other desktop replacement applications suites, users will become free from the burden of carrying a heavy laptop computer everywhere they travel.  Imagine being able to connect your 4G/LTE phone’s HDMI port to a hotel widescreen television monitor, and be able to access all the applications normally used at a desktop.  You can give a presentation off your phone, update company documents, or nearly any other IT function with the only limitation being a requirement to access broadband Internet connections (See # 5 below).

Your phone can already connect to Google Docs and Microsoft Live Office, and the flexibility of access will only improve as iPads and other mobile devices mature.

The other obvious benefit is files will be maintained on servers, much more likely to be backed up and included in a disaster recovery plan.

4.  The Science of Data Centers.  It has only been a few years since small hosting companies were satisfied to go into a data center carved out of a mixed-use building, happy to have access to electricity, cooling, and a menu of available Internet network providers.  Most rooms were designed to accommodate 2~3kW per cabinet, and users installed servers, switches, NAS boxes, and routers without regard to alignment or power usage.

That has changed.  No business or organization can survive without a 24x7x365 presence on the Internet, and most small enterprises – and large enterprises – are either consolidating their IT into professionally managed data centers, or have already washed their hands of servers and other IT infrastructure.

The Uptime Institute, BICSI, TIA, and government agencies have begun publishing guidelines on data center construction providing best practices, quality standards, design standards, and even standards for evaluation.  Power efficiency metrics such as PUE and DCiE provide additional guidance on power management, data center management, and design.
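
The PUE and DCiE metrics themselves are simple ratios of total facility power to IT equipment power; the sketch below shows the standard calculation with made-up facility numbers:

    # Standard PUE / DCiE calculations with made-up facility numbers.

    def pue(total_facility_kw, it_equipment_kw):
        """Power Usage Effectiveness: total facility power / IT equipment power."""
        return total_facility_kw / it_equipment_kw

    def dcie(total_facility_kw, it_equipment_kw):
        """DCiE: the reciprocal of PUE, expressed as a percentage."""
        return 100 * it_equipment_kw / total_facility_kw

    print(pue(1800, 1000))   # 1.8 -> 0.8W of overhead for every watt of IT load
    print(dcie(1800, 1000))  # ~55.6% of facility power reaches the IT equipment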

The days of small business technicians running into a data center at 2 a.m. to install new servers, repair broken servers, and pile their empty boxes or garbage in their cabinet or cage on the way out are gone.  The new data center religion is discipline, standards, discipline, and security. 

Electricity is as valuable as platinum, just as cooling and heat are managed more closely than inmates at San Quentin.  While every other standards organization is now offering certification in cabling, data center design, and data center management, we can soon expect universities to offer an MS or Ph.D in data center sciences.

5.  The 4th Utility Gains Traction.  Orwell’s “1984” painted a picture of pervasive government surveillance, and incessant public mind control (Wikipedia).  Many people believe the Internet is the source of all evil, including identity theft, pornography, crime, over-socialization of cultures and thoughts, and a huge intellectual time sink that sucks us into the need to be wired or connected 24 hours a day.

Yes, that is pretty much true, and if we do not consider the 1000 good things about the Internet versus each 1 negative aspect, it might be a pretty scary place to consider all future generations being exposed and indoctrinated.  The alternative is to live in an intellectual Brazilian or Papuan rain forest, one step out of the evolutionary stone age.

The Internet is not going away, unless some global repressive government, fundamentalist religion, or dictator manages to dismantle civilization as we know it.

The 4th utility identifies broadband access to the ‘net as a basic right of all citizens, with the same status as roads, water, and electricity.  All governments with a desire to have their nation survive and thrive in the next millennium will find a way to cooperate with network infrastructure providers to build out their national information infrastructure (haven’t heard that term since Al Gore, eh?).

Without a robust 4th utility, our children and their children will produce a global generation of intellectual migrant workers, intellectual refugees from a failed national information sciences vision and policy.

2012 should be a great year.  All the above predictions are positive and, if proved true, will leave the United States and other countries with stronger capacities to improve their national quality of life, and bring us all another step closer together.

Happy New Year!

5 Cloud Computing Predictions for 2011

  1. ESBaaS Will Emerge in Enterprise Clouds.  Enterprise service bus as a service will begin to emerge within enterprise clouds to allow common messaging within applications among different organizational units.  This will further support standardization within an enterprise, as well as reduce lead times for applications development.
  2. Enterprise Cloud Computing will Accelerate Data Center Consolidation.  As enterprises and governments continue to deal with the cost of operating individual data centers, consolidation will become a much more important topic.  As the consolidation process is planned, further migration to cloud computing and virtualized environments will become very attractive – if not critical – to all organizations.
  3. Desktop Virtualization.   As we become more comfortable with Google Apps, Microsoft Office 365, and other desktop replacement environments, the need for high-powered desktop workstations will be reduced to power users.  In addition to the obvious attraction for better data protection and disaster recovery, the cost of expensive workstations and local application licenses makes little sense.  The first migration will be for those who are primarily connected via an organizational LAN, with road warriors and mobile users following as broadband becomes more ubiquitous.
  4. SME Data Center Outsourcing into Public Clouds.  Small companies  requiring routine data center support, including office automation, servers, finance applications, and web presence, will find it difficult to justify installing their own equipment in a private or public colocation center.  In fact, it is unlikely savvy investors will support start up companies planning to operate their own data center, unless they are in an industry considered a very clear exception to normal IT requirements.
  5. Cloud Computing and Cloud Storage will Look to PODs and Containers.  Microsoft and Google have proven the concept on a large scale, now the rest of the cloud computing and data center industry will take notice and begin to consider compute and storage capacity as a utility.  As a utility the compute, storage, switching, and communications components will take advantage of greater efficiencies and design flexibility of moving beyond the traditional data center concrete.  This will further support the idea of distributed cloud computing, portability, cloud exchanges, and cloud spot markets in 2012…

Cloud Computing Wish List for 2011

2010 was a great year for cloud computing.  The hype phase of cloud computing is closing in on maturity, as the message has finally hit the awareness of nearly all in the Cxx tier.  And for good reason.  The diffusion of IT-everything into nearly every aspect of our lives needs a lot of compute, storage, and network horsepower.

And… we are finally getting to the point where cloud computing is no longer explained with exotic diagrams on a white board or PowerPoint presentation, but is actually something we can start knitting together into a useful tool.

The National Institute of Standards and Technology (NIST) in the United States takes cloud computing seriously, and is well on the way to setting standards for cloud computing, at least in the US.  The NIST definitions of cloud computing are already an international reference, and as that taxonomy continues to baseline vendor cloud solutions, it is a good sign we are  on the way to product maturity.

Now is the Time to Build Confidence

Unless you are an IT manager in a bleeding-edge technology company, there is rarely any incentive to be in the first-mover quadrant of technology implementation.  The intent of IT managers is to keep the company’s information secure and provide the utilities needed to meet company objectives.  Putting a company at risk by implementing “cool stuff” is not the best career choice.

However, as cloud computing continues to mature, and the cost of operating an internal data center continues to rise (due to the cost of electricity, real estate, and equipment maintenance), IT managers really have no choice – they have to at least learn the cloud computing technology and operations environment.  If for no other reason than their Cxx team will eventually ask the question of “what does this mean to our company?”

An IT manager will need to prepare an educated response to the Cxx team, and be able to clearly articulate the following:

  • Why cloud computing would bring operational or competitive advantage to the company
  • Why it might not bring advantage to the company
  • The cost of operating in a cloud environment versus a traditional data center environment
  • The relationship between data center consolidation and cloud computing
  • The advantage or disadvantage of data center outsourcing and consolidation
  • The differences between enterprise clouds, public clouds, and hybrid clouds
  • The OPEX/CAPEX comparisons of running individual servers versus virtualization, or virtualization within a cloud environment
  • Graphically present and describe cloud computing models compared to traditional models, including the cost of capacity (a simple comparison sketch follows this list)
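
As a minimal illustration of that OPEX/CAPEX comparison, the sketch below contrasts buying servers outright against paying an hourly rate for equivalent virtual capacity; every figure is an assumption chosen only to show the shape of the calculation, not real pricing:

    # Hypothetical CAPEX vs. OPEX comparison over three years (illustrative only).

    SERVER_CAPEX = 6000        # assumed purchase price per physical server
    SERVER_ANNUAL_OPEX = 1500  # assumed power, space, and maintenance per server/year
    CLOUD_HOURLY_RATE = 0.25   # assumed cost of an equivalent virtual instance per hour

    def on_premises_cost(servers, years):
        return servers * (SERVER_CAPEX + SERVER_ANNUAL_OPEX * years)

    def cloud_cost(instances, years, utilization=0.4):
        """Pay only for the hours actually used (utilization < 1.0)."""
        hours = years * 365 * 24 * utilization
        return instances * hours * CLOUD_HOURLY_RATE

    print(on_premises_cost(servers=20, years=3))               # 210000
    print(cloud_cost(instances=20, years=3, utilization=0.4))  # 52560.0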

Wish List Priority 1 – Cloud Computing Interoperability

It is not just about vendor lock-in.  It is not just about building a competitive environment.  It is about having the opportunity to use local, national, and international cloud computing resources when it is in the interest of your organization.

Hybrid clouds are defined by NIST, but in reality are still simply a great idea.  The idea of being able to overflow processing from an enterprise cloud to a public cloud is well-founded, and in fact represents one of the basic visions of cloud computing.  Processing capacity on demand.

But let’s take this one step further.  The cloud exchange.  We’ve discussed this for a couple of years, and now the technology needs to catch up with the concept.

If we can have an Internet Exchange, a Carrier Ethernet Exchange, and a telephone exchange – why can’t we have a Cloud Exchange, a single one-stop shop where cloud compute capacity consumers can access a spot market for on-demand cloud compute resources?

Here is one idea.  Take your average Internet Exchange Point, like Amsterdam (AMS-IX), Frankfurt (DE-CIX), Any2, or London (LINX), where hundreds of Internet networks, content delivery networks, and enterprise networks come together to interconnect at a single point.  This is the place where the only restriction on interconnection of networks and resources is the capacity of the port(s) connecting you to the exchange point.

Most Internet Exchange Points are colocated with large data centers, or are in very close proximity to large data centers (with a lot of dark fiber connecting the facilities).  The data centers manage most of the large content delivery networks (CDNs) facing the Internet.  Many of those CDNs have irregular capacity requirements based on event-driven, seasonal, or other activities.

The CDN can either build their colocation capacity to meet the maximum forecast requirements of their product, or they could potentially interconnect with a colocated cloud computing company for overflow capacity – at the point of Internet exchange.

The cloud computing companies (with the exception of the “Big 3”), are also – yes, in the same data centers as the CDNs.  Ditto for the enterprise networks choosing to either outsource their operations into a data center – or outsource into a public cloud provider.
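
A toy matching routine gives a feel for what such a spot market might do; the offers and the cheapest-first rule below are invented purely to illustrate the concept:

    # Toy cloud-exchange spot market: match CDN overflow demand to colocated
    # cloud capacity offers. Entirely illustrative.

    offers = [
        {"provider": "cloud-A", "vcpus": 500, "price_per_vcpu_hour": 0.05},
        {"provider": "cloud-B", "vcpus": 200, "price_per_vcpu_hour": 0.04},
    ]

    def match_overflow(vcpus_needed):
        """Fill an overflow request from the cheapest offers first."""
        allocation = []
        for offer in sorted(offers, key=lambda o: o["price_per_vcpu_hour"]):
            take = min(vcpus_needed, offer["vcpus"])
            if take:
                allocation.append((offer["provider"], take))
                vcpus_needed -= take
        return allocation

    print(match_overflow(600))  # [('cloud-B', 200), ('cloud-A', 400)]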

Wish List – Develop a cloud computing exchange colocated, or part of large Internet Exchange Points.

Wish List Extra Credit – Switch vendors develop high capacity SSDs that fit into switch slots, making storage part of the switch back plane.

Simple and Secure Disaster Recovery Models

Along with the idea of distributed cloud processing, interoperability, and on-demand resources comes the most simple of all cloud visions – disaster recovery.

One of the reasons we all talk cloud computing is the potential for data center consolidation and recovery of CAPEX/OPEX for reallocation into development and revenue-producing activities.

However, with data center consolidation comes the equally important task of developing strong disaster recovery and business continuity models.  Whether it be through producing hot standby images of applications and data, simply backing up data into a remote (secure) location, or both, disaster recovery still takes on a high priority for 2011.

You might state “disaster recovery has been around since the beginning of computing, with 9-track tape copies and punch cards – what’s new?”

What’s new is the reality that most companies and organizations still have no meaningful disaster recovery plan.  There may be a weekly backup to tape or disk; there may even be the odd company or organization with a standby capability that limits recovery time and recovery point objectives to a day or two.  But let’s be honest – those are the exceptions.

Having surveyed enterprise and government users over the past two years, we have noticed that very, very few organizations with paper disaster recovery plans actually implement their plans in practice.  This includes many local and state governments within the US (check out some of the reports published by the National Association of State CIOs/NASCIO if you don’t believe this statement!).

Wish List Item 2 – Develop a simple, really simple, and cost-effective disaster recovery model within the cloud computing industry.  Make it an inherent part of all cloud computing products and services.  Make it so simple no IT manager can ever again come up with an excuse why their recovery point and time objectives are not ZERO.
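
To show what measuring those objectives might look like, here is a trivial recovery point check; the replication age and RPO values are assumed inputs, not recommendations:

    # Trivial recovery point objective (RPO) check (illustrative only).
    from datetime import datetime, timedelta

    def rpo_breached(last_replica_time, rpo_minutes):
        """True if the newest recoverable copy is older than the stated RPO."""
        age = datetime.utcnow() - last_replica_time
        return age > timedelta(minutes=rpo_minutes)

    # A week-old backup measured against a 15-minute RPO
    last_backup = datetime.utcnow() - timedelta(days=6)
    print(rpo_breached(last_backup, rpo_minutes=15))  # True -> the plan exists only on paper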

Moving Towards the Virtual Desktop

Makes sense.  If cloud computing brings applications back to the SaaS model, and communications capacity and bandwidth have reduced delays – even on long-distance connections – to the point we humans cannot tell if we are on a LAN or a WAN, then let’s start dumping high-cost workstations.

Sure, that 1% of the IT world using CAD, graphics design, and other funky stuff will still need the most powerful computer available on the market, but the rest of us can certainly live with hosted email, other unified communications, and office automation applications.  You start your dumb terminal with the 30” screen at 0800, and log off at 1730.

If you really need to check email at night or on the road, your 3G/4G smartphone or netbook connection will provide more than adequate bandwidth to connect to your hosted email application or files.

This supports disaster recovery objectives, lowers the cost of expensive workstations, and allows organizations to regain control of their intellectual property.

With applications portability, at this point it makes no difference if you are using Google Apps, Microsoft 365, or some other emerging hosted environment.

Wish List Item 3 – IT Managers, please consider dumping the high end desktop workstation, gain control over your intellectual property, recover the cost of IT equipment, and standardize your organizational environment.

More Wish List Items

Yes, there are many more.  But those start edging towards “cool.”  We want to concentrate on those items really needed to continue pushing the global IT community towards virtualization.
