SDNs in the Carrier Hotel

Carrier hotels are an integral part of global communications infrastructure. The carrier hotel serves a vital function: a common point of interconnection between facility-based carriers (physical cable in terrestrial, submarine, or satellite networks), content delivery networks (CDNs), Internet Service Providers (ISPs), hosting companies, and even private or government networks.

In some locations, such as the One Wilshire Building in Los Angeles, or 60 Hudson in New York, several hundred carriers and service providers may interconnect physically within a main distribution frame (MDF), or virtually through interconnections at Internet Exchange Points (IXPs) or Ethernet Exchange points.

Carrier hotel operators understand that technology is starting to overcome many of the traditional forms of interconnection.  With 100Gbps wavelengths and port speeds, network providers are able to push many individual virtual connections through a single interface, reducing the need for individual cross connections or interconnections to establish customer or inter-network circuits.

While connections, including Internet peering and VLANs, have been available for many years through IXPs and circuit multiplexing, software-defined networking (SDN) is poised to provide a new model of interconnection at the carrier hotel, forcing not only an upgrade of supporting technologies, but also a reconsideration of the entire model and concept of how the carrier hotel operates.

Several telecom companies have announced their own internal deployments of SDN-based order-fulfillment platforms, including PacNet’s PEN and Level 3’s (formerly TW Telecom) pilot test at DukeNet, proving that circuit design and provisioning can be easily accomplished through SDN-enabled orchestration engines.

However, inter-carrier circuit and service orchestration is not yet in common use at the main carrier hotels and interconnection points.

Taking a closer look at the carrier hotel environment, an opportunity emerges. If the carrier hotel operator provides an orchestration platform that allows individual carriers, networks, cloud service providers, CDNs, and other participants to connect at a common point, with standard APIs enabling communication between different participants’ network or service resources, then interconnection fulfillment may be completed in a matter of minutes, rather than the days or weeks required in the current environment.

This capability goes a step deeper. Let’s say Carrier “A” has an enterprise customer connected to its network. The customer has an on-demand provisioning arrangement with Carrier “A,” allowing the customer to establish communications not only within Carrier “A’s” network resources, but also to flow through the carrier hotel’s interconnection broker into, say, a cloud service provider’s network. The customer should be able to design and provision their own solutions, based on the availability of internal and interconnection resources accessible through the carrier.

Participants will announce their available resources to the carrier hotel’s orchestration engine (network access broker), and those resources can then be provisioned on-demand by any other participant (assuming the participants have a service agreement or financial accounting agreement, either based on the carrier hotel’s standard or established individually between participants).
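As a sketch of this announce-and-provision flow (all class, participant, and method names below are illustrative, not any real carrier-hotel API):

```python
# Illustrative sketch of a carrier-hotel interconnection broker: participants
# announce available resources, and any other participant with a service
# agreement can provision capacity on demand. Hypothetical model only.

class InterconnectionBroker:
    def __init__(self):
        self.catalog = {}        # resource_id -> (owner, available capacity in Mbps)
        self.agreements = set()  # (consumer, owner) pairs with a service agreement

    def announce(self, owner, resource_id, capacity_mbps):
        """A participant publishes an available resource to the broker."""
        self.catalog[resource_id] = (owner, capacity_mbps)

    def add_agreement(self, consumer, owner):
        """Record a service or financial accounting agreement between participants."""
        self.agreements.add((consumer, owner))

    def provision(self, consumer, resource_id, mbps):
        """Provision capacity on demand, if an agreement exists and capacity allows."""
        owner, available = self.catalog[resource_id]
        if (consumer, owner) not in self.agreements:
            raise PermissionError("no service agreement between participants")
        if mbps > available:
            raise ValueError("insufficient capacity")
        self.catalog[resource_id] = (owner, available - mbps)
        return {"resource": resource_id, "owner": owner,
                "consumer": consumer, "mbps": mbps}

broker = InterconnectionBroker()
broker.announce("CarrierA", "wave-1", 100_000)          # a 100Gbps wavelength
broker.add_agreement("CloudX", "CarrierA")
circuit = broker.provision("CloudX", "wave-1", 10_000)  # 10Gbps, on demand
```

The point of the sketch is the shape of the interaction: fulfillment becomes a catalog lookup and a policy check, rather than a multi-week manual circuit order.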

If we use NIST’s characteristics of cloud computing as a potential model, then the carrier hotel’s interconnection orchestration engine should ultimately provide participants:

  • On-demand self-service provisioning
  • Elasticity, meaning short term usage agreements, possibly even down to the minute or hour
  • Resource pooling, or a model similar to a spot market (in competing markets where multiple carriers or service providers may be able to provide the same service)
  • Measured service (usage-based or usage-sensitive billing for service use)
  • And of course broad network access – currently 100Gbps ports, or multiples of 100Gbps (until 1Tbps ports become available)
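To make the “elasticity” and “measured service” characteristics concrete, a usage-sensitive charge for a short-term circuit might be computed as follows (the rate and function name are invented for the example):

```python
# Hypothetical usage-sensitive billing for short-term circuits: charge only
# for the minutes a circuit was active, at a per-Gbps-per-minute rate.
# The rate below is illustrative, not any operator's actual pricing.

def usage_charge(gbps, minutes_active, rate_per_gbps_minute=0.02):
    """Measured service: charge = capacity x active time x unit rate."""
    return round(gbps * minutes_active * rate_per_gbps_minute, 2)

# A 10Gbps circuit held for 90 minutes, then released (elastic usage):
print(usage_charge(10, 90))  # 10 * 90 * 0.02 = 18.0
```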

While layer 1 (physical) interconnection of network resources will always be required – the bits need to flow over fiber or wireless at some point – the future of carrier and service resource intercommunication must evolve to accept and acknowledge the need for user-driven, near-real-time provisioning of network and other service resources, on a global scale.

The carrier hotel will continue to play an integral role in bringing this capability to the community, and the future is likely to be based on software-driven, on-demand meet-me-rooms.

PTC 2015 Wraps Up with Strong Messages on SDNs and Automation

Software Defined Networking (SDN) and Network Function Virtualization (NFV) themes dominated workshops and side conversations throughout the PTC 2015 venue in Honolulu, Hawai’i this week.

SDNs, or more specifically provisioning automation platforms for service provider interconnections, have crept into nearly all marketing materials and elevator pitches in discussions with submarine cable operators, networks, Internet Exchange Points, and carrier hotels.

While some of the material may have included a bit of “SDN washing,” for the most part each operator and service provider engaging in the discussion understands, and is scrambling to address, the need for communications access, and is very serious in acknowledging a pending industry “paradigm shift” in service delivery models.

Presentations by companies such as Ciena and Riverbed showed a mature service delivery structure based on SDN, while PacNet and Level 3 Communications (formerly TW Telecom) presented functional on-demand, self-service models of both service provisioning and a value-added marketplace.

Steve Alexander from Ciena explained some of the challenges the industry must address, such as development of cross-industry SDN-enabled service delivery and provisioning standards. In addition, as service providers move into service delivery automation, they must still be able to provide a discriminating or unique selling point by considering:

  • How to differentiate their service offering
  • How to differentiate their operations environment
  • How to ensure industry-acceptable delivery and provisioning time cycles
  • How to deal with legacy deployments

Alexander also emphasized that as an industry we need to get away from physical wiring where possible. With 100Gbps ports, and the ability to create a software abstraction of individual circuits within the 100Gbps resource pool (as an example), a great deal of virtual or logical provisioning can be accomplished without the need for dozens or hundreds of physical cross connections.
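A minimal sketch of that abstraction (hypothetical names, not Ciena’s or anyone’s actual software): one physical 100Gbps port treated as a pool from which logical circuits are carved and released without rewiring:

```python
# Sketch: a single physical 100Gbps port modeled as a capacity pool of
# logical circuits. Provisioning and teardown are software operations;
# no physical cross connect is touched. Illustrative model only.

class Port:
    def __init__(self, capacity_gbps=100):
        self.capacity_gbps = capacity_gbps
        self.circuits = {}  # circuit_id -> committed Gbps

    def allocated(self):
        return sum(self.circuits.values())

    def provision(self, circuit_id, gbps):
        """Carve a logical circuit out of the port's remaining capacity."""
        if self.allocated() + gbps > self.capacity_gbps:
            raise ValueError("port capacity exceeded")
        self.circuits[circuit_id] = gbps

    def release(self, circuit_id):
        """Logical teardown - the freed capacity is immediately reusable."""
        del self.circuits[circuit_id]

port = Port()
port.provision("cust-a", 40)
port.provision("cust-b", 40)
port.release("cust-a")        # no physical rewiring required
port.provision("cust-c", 60)  # freed capacity reused within the same port
```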

The result of this effort should be such an environment both within a single service provider, and within a broader community marketplace such as a carrier hotel or large telecom interconnection facility (e.g., the Westin Building, 60 Hudson, One Wilshire). Some examples of actual and required deployments included:

  • A bandwidth on-demand marketplace
  • Data center interconnections, including data center operators with multiple interconnected meet-me-points spread across a geographic area
  • Interconnection to other services within the marketplace such as cloud service providers (e.g., Amazon Direct Connect, Azure, SoftLayer), content delivery networks, SaaS providers, and disaster recovery capacity and services

Robust discussions on standards also spawned debate. With SDN, much like any other emerging technology or business model, there are both competing and complementary standards. Even terms such as Network Function Virtualization (NFV), while useful, do not have much depth within standard taxonomies or definitions.

During the PTC 2015 session entitled “Advanced Capabilities in the Control Plane Leveraging SDN and NFV Toward Intelligent Networks,” a long list of current standards and products supporting the concept of SDN was presented, including:

  • Open Contrail
  • Open Daylight
  • Open Stack
  • Open Flow
  • OPNFV
  • ONOS
  • OvS
  • Project Floodlight
  • Open Networking
  • and on and on….

For consumers and small network operators this is a very good development, and will certainly usher in a new era of on-demand self-service capacity provisioning, elastic provisioning (short-term service contracts, even down to the minute or hour), carrier hotel-based bandwidth and service marketplaces, and variable usage metering and costs, allowing much better use of OPEX budgets.

For service providers (according to discussions with several North Asian telecom carriers) it is not quite as attractive, as they generally prefer long-term, fixed contracts or wholesale capacity sales.

The connection and integration of cloud services with telecom or network services is quite clear. At some point, provisioning of both telecom and compute/storage/application services will be through a single interface: on-demand, elastic (use only what you need, for only as long as you need it), usage-based (metered), and favoring the end user.

While most operators get the message, and are either developing or deploying their first-iteration solutions, others simply still have a bit of homework to do. In the words of one CEO from a very large international data center company, “we really need to have a strategy to deal with this multi-cloud, hybrid cloud, or whatever you call it thing.”

Oh my…

PTC 2015 Focuses on Submarine Cables and SDNs

In an informal survey of words used during seminars and discussions, two main themes emerged at the Pacific Telecommunications Council’s 2015 annual conference. The first, as expected, is development of more submarine cable capacity, both within the Pacific and to end points in ANZ, Asia, and North America. The second, software-defined networking (SDN), which as envisioned could quickly begin to re-engineer the gateway and carrier hotel interconnection business.

New cable developments, including Arctic Fiber, Trident, SEA-US, and APX-E, have sparked a lot of interest. One discussion at Sunday morning’s Submarine Cable Workshop highlighted the need for Asian (and other) regions to find ways to bypass the United States, not just for performance reasons, but also to prevent US government agencies from intercepting and potentially exploiting data transiting US networks and data systems.

The bottom line of all submarine cable discussions is the need for more, and more, and more cable capacity. Applications using international communications capacity, notably video, are consuming bandwidth at rates driving fear that the cable operators won’t be able to keep up with demand.

However, perhaps the most interesting, and frankly surprising, development is SDN in the meet-me-room (MMR). Products such as PacNet’s PEN (PacNet Enabled Network) are finally making on-demand, self-service circuit provisioning a reality, and soon cloud computing capacity provisioning within the MMR. Demonstrations showed how a network, or user, can provision from 1Mbps to 10Gbps point-to-point within a minute.

In the past, on-demand provisioning of interconnections was limited to Internet Exchange Points, fiber cross connects, VLANs, and point-to-point Ethernet connections. Now, as carrier hotels and MMRs acknowledge the need for rapid provisioning of elastic (rapid addition and deletion of bandwidth or capacity) resources, physical cross connect and IXP peering tools will not be adequate for future market demands.

SDN models such as PacNet’s PEN are a very innovative step toward this vision. The underlying physical interconnection infrastructure simply becomes a software abstraction for end users (including carriers and networks), allowing circuit provisioning in a matter of minutes rather than days.

The main requirement for full deployment is to “sell” carriers and networks on the concept, as key success factors will revolve around the network effect of participant communities. Simply put, the more networks connecting and participating within the SDN “community,” the more value the SDN MMR brings to a facility or market.
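That network effect is often approximated by the number of possible participant pairs, n(n-1)/2, which grows quadratically with the size of the community. A quick illustration:

```python
# Potential one-to-one interconnections among n participating networks:
# each pair of participants is one possible interconnection.
def possible_interconnections(n):
    return n * (n - 1) // 2

# Growth of a community's interconnection potential as it scales to the
# several hundred tenants found at a large carrier hotel:
for n in (10, 50, 150):
    print(n, possible_interconnections(n))
# 10 participants -> 45 pairs; 150 participants -> 11175 pairs
```

Each new participant therefore adds value for every existing participant, which is why the “sell” becomes easier as the community grows.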

A great start to PTC 2015.  More PTC 2015 “sidebars” on Tuesday.

You Want Money for a Data Center Buildout?

A couple of years ago I attended several “fast pitch” competitions and events for entrepreneurs in Southern California, all designed to give startups a chance to “pitch” their ideas in about 60 seconds to a panel of representatives from the local investment community. Similar to television’s “Shark Tank,” most of the idea pitches were harshly critiqued, with the real intent of helping participating entrepreneurs develop a better story for approaching investors and markets.

While very few of the pitches received a strong, positive response, I recall one young guy who really set the panel back a step in awe.  The product was related to biotech, and the panel provided a very strong, positive response to the pitch.

Wishing to dig a bit deeper, one of the panel members asked the guy how much money he was looking for in an investment, and how he’d use the money.

“$5 million,” he responded, with a resounding wave of nods from the panel. “I’d use around $3 million for staffing, getting the office started, and product development.” Another round of positive expressions. “And then we’d spend around $2 million setting up in a data center with servers, telecoms, and storage systems.”

This time the panel looked as if they’d just taken a crisp slap to the face. After a moment to collect themselves, the panel spokesman launched into a dressing-down of the entrepreneur, stating “I really like the product, and think your vision is solid. However, with a greater than 95% chance of your company going bust within the first year, I have no desire to be stuck with $2 million worth of obsolete computer hardware, and potentially contract liabilities, once you shut down your data center. You’ve got to use your head, go to Amazon for your data center capacity, and forget this data center idea.”

Now it was the entire audience’s turn to take a pause.

In the past, IT managers placed buying and controlling their own hardware, in their own facility, as a high priority – with no room for compromise. Whether for perceptions of security, a desire for personal control, or simply a concern that outsourcing would limit their own career potential, server closets and small data centers were a common characteristic of most small offices.

At some point, the need for proximity to Internet or communication exchange points, or simple limitations on local facility capacity, started forcing a migration of enterprise data centers into commercial colocation. For the most part, IT managers still owned and controlled any hardware outsourced into the colocation facility, and most agreed that colocation facilities generally offered higher uptime, fewer service disruptions, and good performance, particularly for eCommerce sites.

Now we are at a new IT architecture crossroads. Is there really any good reason for a startup, medium, or even large enterprise to continue operating its own data center, or even its own hardware within a colocation facility? Certainly if the average CFO or business unit manager had their choice, the local data center would be decommissioned and shut down as quickly as possible. The CAPEX investment, carrying hardware on the books for years of depreciation, lack of business agility, and the costs and risks of business continuity and disaster recovery force the question: “why don’t we just rent IT capacity from a cloud service provider?”

Many still question the security of public clouds, many still question the compliance issues related to outsourcing, and many still simply do not want to give up their “soon-to-be-redundant” data center jobs.

Of course it is clear most large cloud computing companies have much better resources available to manage security than a small company, and they have made great advances in compliance certifications (mostly due to the US government acknowledging the role of cloud computing and changing regulations to accommodate it). If we look at the US government’s FedRAMP certification program as an example, security, compliance, and management controls are now a standard – open for all organizations to study and adopt as appropriate.

So we get back to the original question: what would justify a company continuing to build data centers, when a virtual data center (as the first small step in adopting a cloud computing architecture) will provide better flexibility, agility, security, and performance, at lower cost than operating local or colocated physical IT infrastructure? Sure, exceptions exist, including specialized hardware interfaces supporting mining, health care, or other very specialized activities. However, if you’re not in the computer or switch manufacturing business, can you really continue justifying CAPEX expenditures on IT?

IT is quickly becoming a utility. As businesses we do not plan to build roads, water distribution, or our own power generation plants. Compute, telecom, and storage resources are becoming a utility as well, and IT managers (and data center / colocation companies) need to do a comprehensive review of their business and strategy, and find a way to exploit this technology reality, rather than allow it to pass them by.

Developing a New “Service-Centric IT Value Chain”

As IT professionals we have been overwhelmed with different standards for each component of architecture, service delivery, governance, security, and operations. Not only does IT need to ensure technical training and certification, but it is also expected to pursue certifications in ITIL, TOGAF, COBIT, PMP, and a variety of other frameworks – at a high cost in both time and money.

Wouldn’t it be nice to have an IT framework or reference architecture which brings all the important components of each standard or recommendation into a single model which focuses on the most important aspect of each existing model?

The Open Group is well known for publishing TOGAF (The Open Group Architecture Framework), in addition to a variety of other standards and frameworks related to Service-Oriented Architectures (SOA), security, risk, and cloud computing. In the past few years, recognizing the impact of broadband, cloud computing, SOAs, and the need for a holistic enterprise architecture approach to business and IT, it has published many common-sense but powerful recommendations, such as:

  • TOGAF 9.1
  • Open FAIR (Risk Analysis and Assessment)
  • SOCCI (Service-Oriented Cloud Computing Infrastructure)
  • Cloud Computing
  • Open Enterprise Security Architecture
  • Document Interchange Reference Model (for interoperability)
  • and others.

The Open Group’s latest project intended to streamline and focus IT systems development is called the “IT4IT” Reference Architecture. While still in the development, or “snapshot,” phase, IT4IT is surprisingly easy to read and understand, and, most importantly, logical.

“The IT Value Chain and IT4IT Reference Architecture represent the IT service lifecycle in a new and powerful way. They provide the missing link between industry standard best practice guides and the technology framework and tools that power the service management ecosystem. The IT Value Chain and IT4IT Reference Architecture are a new foundation on which to base your IT operating model. Together, they deliver a welcome blueprint for the CIO to accelerate IT’s transition to becoming a service broker to the business.” (Open Group’s IT4IT Reference Architecture, v 1.3)

The IT4IT Reference Architecture acknowledges changes in both technology and business resulting from the incredible impact the Internet and automation have had on both enterprise and government use of information and data. However, the document also makes a compelling case that IT systems, theory, and operations have not kept up with either existing IT support technologies or the business visions and objectives IT is meant to serve.

IT4IT’s development team is a large, global collaborative effort including vendors, enterprises, telecommunications companies, academia, and consulting firms. This helps drive a vendor- and technology-neutral framework, focusing more on running IT as a business than on conforming to a single vendor’s product or service. Eventually, like all developing standards, IT4IT may push vendors and systems developers toward a solid model and framework for developing business solutions, supporting greater interoperability and data sharing between both internal and external organizations.

The visions and objectives for IT4IT include two major components: the IT Value Chain and the IT4IT Reference Architecture. Within the IT4IT Core are sections providing guidance, including:

  • IT4IT Abstractions and Class Structures
  • The Strategy to Portfolio Value Stream
  • The Requirement to Deploy Value Stream
  • The Request to Fulfill Value Stream
  • The Detect to Correct Value Stream

Each of the above main sections has borrowed from, or further developed, ideas and activities within ITIL, COBIT, and TOGAF, but takes a giant leap by incorporating cloud computing, SOAs, and enterprise architecture into the product.

As the IT4IT Reference Architecture is completed, and supporting roadmaps are developed, the IT4IT concept will no doubt find a large legion of supporters, as many, if not most, businesses and IT professionals find the certification and knowledge path for ITIL, COBIT, TOGAF, and other supporting frameworks either too expensive or too time-consuming (both in training and implementation).

Take a look at IT4IT at the Open Group’s website, and let us know what you think.  Too light?  Not needed?  A great idea or concept?  Let us know.

Asian Carrier’s Conference 2013 Kicks Off in Cebu

The 2013 ACC kicked off on Tuesday morning with an acknowledgement by Philippine Long Distance Telephone Company (PLDT) CEO Napoleon L. Nazareno that “we’re going through a profound and painful transformation to digital technologies.” He continued to explain that in addition to making the move to a digital corporate culture and architecture, for traditional telcos to succeed they will need to “master new skills, including new partnership skills.”

That direction drives a line straight down the middle of attendees at the conference. Surprisingly, many companies attending and advertising their products still focus on “minutes termination,” and traditional voice-centric relationships with other carriers and “voice” wholesalers.

Matthew Howett, Regulation and Policy Practice Leader for Ovum Research, noted “while fixed and mobile minutes are continuing to grow, traditional voice revenue is on the decline.” He backed the statement with figures on “Over the Top” (OTT) services, in which a service provider sends all types of communications, including video, voice, and other connections, over an Internet protocol network – most commonly the public Internet.

Howett informed the ACC’s plenary session attendees that Ovum Research believes up to US$52 billion will be lost in traditional voice revenues to OTT providers by 2016, and an additional US$32.6 billion to instant messaging providers in the same period.

The message to traditional communications carriers was simple: adapt or become irrelevant. National carriers may work with government regulators to adopt legal barriers preventing OTTs from operating in their country; however, that is only a temporary step to stem the flow of “technology-enabled” competition and retain revenues.

As noted by Nazareno, the carriers must wake up to the reality that we are in a global technology refresh cycle, and construct business visions, expectations, and plans that will not only allow the company to survive, but also meet the needs of their users and national objectives.

Martin Geddes, owner of Martin Geddes Consulting, introduced the idea of “task substitution.” Task substitution occurs when an individual or organization is able to use a substitute technology or process to accomplish tasks that were previously available only from a single source. One example is the traditional telephone call. In the past you would dial a number, and the telephone company would go through a series of connections, switches, and processes that would both connect two end devices and provide accounting for the call.

The telephone user now has many alternatives to the traditional phone call – all task substitutions. You can use Skype, WebEx, GoToMeeting, instant messaging – any one of a multitude of utilities allowing an individual or group to participate in one to one or many to many communications. When a strong list of alternative methods to complete a task exist, then the original method may become obsolete, or have to rapidly adapt to avoid being discarded by users.

A strong message, which made many attendees visibly uncomfortable.

Ivan Landen, Managing Director, Asia-Pacific, at Expereo, described the telecom revolution in terms all attendees could easily visualize: “Today around 80% of the world’s population has access to the electrical grid, while more than 85% of the population has access to wireless.”

He also provided an additional bit of information which did not surprise attendees, but made some of the telecom representatives a bit uneasy: in a survey Geddes conducted, he discovered that more than half of the business executives polled admitted their Internet access was better at home than in their offices. This information can be analyzed in several ways, from poor IT planning within the company, to poor capacity management within the communication provider, to the reality that traffic on consumer networks is simply lower during the business day than at other times.

However the main message was “there is a huge opportunity for communication companies to fix business communications.”

The conference continues until Friday. Many more sessions, many more perimeter discussions, and a lot of space for the telecom community to come to grips with the reality of the digital world.

What Value Can I Expect from Cloud Computing Training?

Normally, when we think of technical training, images of rooms loaded with switches, routers, and servers might come to mind. Cloud computing is different. In reality, cloud computing is not a technology, but rather a framework employing a variety of technologies – most notably virtualization – to solve business problems or enable opportunities.

From our own practice, the majority of cloud training students come from non-technical careers and positions. Our training follows the CompTIA Cloud Essentials course criteria, and is not a technical course, so the non-technical student trend should not come as any big surprise.

What does come as a surprise is how enthusiastically our students dig into the topic.  Whether business unit managers, accounting and finance, sales staff, or executives, all students come into class convinced they need to know about cloud computing as an essential part of their future career progression, or even at times to ensure their career survival.

Our local training methodology is based on establishing an in-depth knowledge of the NIST cloud definitions and Cloud Reference Architecture. Once the students get beyond the perception that such documents are too complex – and accept that we will refer nearly all aspects of training back to both documents – we easily establish the core cloud computing knowledge base needed to explore both the technical and, more importantly, the practical aspects of how cloud computing is used in our daily lives, and likely future lives.

This is not significantly different from when we trained business users on how to use, employ, and exploit the Internet in the 90s. Those of us in engineering or technical operations roles viewed this type of training with either amusement or contempt, at times mocking those who did not share our knowledge and experience of internetworking and ability to navigate the Internet universe.

We are in the same phase of absorbing and developing tacit knowledge of compute and storage access on demand, service-oriented architectures, Software as a Service, and the move to a subscription-based application world.

Those students who attend cloud computing training leave the class better able to engage in decision-making related to both personal and organizational information and communication technology, and less exposed to the spectrum of cloud washing, or the marketing use of “cloud” and “XXX as a Service” language overwhelming nearly all media, on subjects ranging from hamster food to SpaceX and hyperloops.

Even the hardest-core engineers who have degraded themselves to join a non-technical, business-oriented cloud course walk away with a better view of how their tools support organizational agility (good jargon, no?), in addition to the potential financial impacts, reduced application development cycles, disaster recovery, business continuity, and all the other potential benefits to an organization adopting cloud computing.

Some even walk away from the course planning a breakup with some of their favorite physical servers.

The Bottom Line

No student has walked away from a cloud computing course knowing less about the role, impact, and potential of implementing cloud in nearly any organization.  While the first few hours of class embrace a lot of great debates on the value of cloud computing, by the end of the course most students agree they are better prepared to consider, envision, evaluate, and address the potential or shortfalls of cloud computing.

Cloud computing is, and will continue to be, an influence on many aspects of our lives. It is not going away anytime soon. The more we can learn, either through self-study or resident training, the better position we’ll be in to make intelligent decisions regarding the use and value of cloud in our lives and organizations.

Connecting at the Westin Building Exchange in Seattle

International telecommunication carriers all share one thing in common: the need to connect with other carriers and networks. We want to make a call to China, hold a video conference with Moldova, or send an email message for delivery within 5 seconds to Australia – all possible with our current state of global communications. Magic? Of course not. While an abstraction to most, the reality is that telecommunications physical infrastructure extends to nearly every corner of the world, and communications carriers bring this global infrastructure together at a small number of strategically placed facilities informally called “carrier hotels.”

Pacific-Tier had the opportunity to visit the Westin Building Exchange (commonly known as the WBX), one of the world’s busiest carrier hotels, in early August. Located in the heart of Seattle’s bustling business district, the WBX stands 34 stories tall. The building also acts as a crossroads of the Northwest US long-distance terrestrial cable infrastructure, and is adjacent to trans-Pacific submarine cable landing points.

The world’s telecommunications community needs carrier hotels to interconnect their physical and value-added networks, and the WBX is doing a great job of facilitating physical interconnections among its more than 150 carrier tenants.

“We understand the needs of our carrier and network tenants,” explained Mike Rushing, Business Development Manager at the Westin Building.  “In the Internet economy things happen at the speed of light.  Carriers at the WBX are under constant pressure to deliver services to their customers, and we simply want to make this part of the process (facilitating interconnections) as easy as possible for them.”

The WBX community is not limited to carriers.  It has evolved to support Internet Service Providers, Content Delivery Networks (CDNs), cloud computing companies, academic and research networks, enterprise customers, public colocation and data center operators, the NorthWest GigaPOP, and even the Seattle Internet Exchange (SIX), one of the largest Internet exchanges in the world.

“Westin is a large community system,” continued Rushing.  “As new carriers establish a point of presence within the building, and begin connecting to others within the tenant and accessible community, then the value of the WBX community just continues to grow.”

The core of the WBX is the 19th floor meet-me-room (MMR).  The MMR is a large, neutral interconnection point for networks and carriers representing both US and international companies.  For example, if China Telecom needs to connect a customer’s headquarters in Beijing to an office in Boise served by AT&T, the actual circuit must transfer at a physical demarcation point from China Telecom to AT&T.  There is a good chance that physical connection will occur at the WBX.

According to Kyle Peters, General Manager of the Westin Building, “we are supporting a wide range of international and US communications providers and carriers.  We fully understand the role our facility plays in supporting not only our customer’s business requirements, but also the role we play in supporting global communications infrastructure.”

You would be correct in assuming the WBX plays an important role in that critical US and global communications infrastructure.  Thus you would further expect the WBX to be constructed and operated in a manner that gives the community a high level of confidence its installed systems will not fail.

Lance Forgey, Director of Operations at the WBX, manages not only the MMR, but also the massive mechanical (air conditioning) and electrical distribution systems within the building.  A former submarine engineer, Forgey runs the Westin Building much like he operated critical systems within Navy ships.  Assisted by an experienced team of former US Navy engineers and US Marines, the facility presents an image of security, order, cleanliness, and operational attention to detail.

“Our operations and facility staff bring the discipline of many years in the military, adding the innovation needed to keep up with our customers’ industries,” said Forgey.  “Once you have developed a culture of no compromise on quality, it is easy to keep things running.”

That is very apparent when you walk through the site – everything is in its place, it is remarkably clean, and it is very obvious the entire site is the product of a well-prepared plan.

One area that stands out at the WBX is the cooling and electrical distribution infrastructure.  With space in adjacent external parking structures and additional areas outside the building, most heavy equipment is located outside the building itself, providing an additional layer of physical security and allowing the WBX to recover as much interior space as possible for customer use.

“Power is not an issue for us,” noted Forgey.  “It is a limiting factor for much of our industry; however, at the Westin Building we have plenty, and we can add additional power anytime the need arises.”

That is another attraction of the WBX versus some of the other carrier hotels on the US West Coast.  Power in Washington State averages around $0.04/kWh, while power in California may be nearly three times as expensive.

“In addition to having all the interconnection benefits similar operations offer on the West Coast, the WBX can also significantly lower operating costs for tenants,” added Rushing.  As power is one of the largest line items in data center operations, a significant reduction in its cost is a big deal.
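The power-cost argument is easy to quantify.  A minimal sketch, assuming a hypothetical steady 500 kW tenant load and flat rates of $0.04/kWh versus $0.12/kWh (roughly the three-to-one spread noted above; actual tariffs vary):

```python
# Rough annual power-cost comparison for a colocation tenant.
# The 500 kW load and flat per-kWh rates are illustrative assumptions.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_power_cost(load_kw: float, rate_per_kwh: float) -> float:
    """Annual cost of a constant electrical load at a flat rate."""
    return load_kw * HOURS_PER_YEAR * rate_per_kwh

seattle = annual_power_cost(500, 0.04)      # ~$0.04/kWh in Washington State
california = annual_power_cost(500, 0.12)   # roughly three times as expensive

print(f"Seattle:    ${seattle:,.0f}/year")
print(f"California: ${california:,.0f}/year")
print(f"Savings:    ${california - seattle:,.0f}/year")
```

At these assumed numbers the tenant saves roughly $350,000 per year on power alone, which is why the rate differential keeps coming up in site-selection conversations.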

The final area carrier hotels need to address is the ever-changing nature of communications, including interconnections between members of the WBX community.  Nothing is static, and the WBX team is constantly communicating with tenants, evaluating changes in supporting technologies, and looking for ways to ensure tenants have the tools available to meet their rapidly changing environments.

Cloud computing, software-defined networking, carrier Ethernet – all topics that require frequent communication with tenants to gain insight into their visions, concerns, and plans.  The WBX staff showed great interest in cooperating with tenants to ensure the WBX never impedes development or implementation of new technologies, and in staying ahead of customer deployments.

“If a customer comes to us and tells us they need a new support infrastructure or framework with very little lead time, then we may not be able to respond quickly enough to meet their requirements” concluded Rushing.  “Much better to keep an open dialog with customers and become part of their team.”

Pacific-Tier has visited and evaluated dozens of data centers during the past four years.  Some have been very good, some have been very bad.  Some have gone over the edge in data center deployments, chasing the “grail” of a Tier IV data center certification, while others have been little more than a server closet.

The Westin Building / WBX is unique in the industry.  Owned jointly by Clise Properties of Seattle and Digital Realty Trust, the Westin Building brings the best of both the real estate world and the data center world into a single operation.  The quality of the mechanical and electrical infrastructure, the people maintaining it, and the vision of the company give a visitor the impression that not only is the WBX a world-class facility, but that its staff and management know their business, enjoy the business, and put their customers first.

As Clise Properties owns much of the surrounding land, the WBX has plenty of opportunity to grow as the business expands and changes.  “We know cloud computing companies will need to locate close to the interconnection points, so we better be prepared to deliver additional high-density infrastructure as their needs arise” said Peters.  And in fact Clise has already started planning for their second colocation building.  This building, like its predecessor, will be fully interconnected with the Westin Building, including virtualizing the MMR distribution frames in each building into a single cross interconnection environment.

WBX offers the global telecom industry an alternative to the carrier hotels in Los Angeles and San Francisco.  One shortfall in the global telecom industry is the “single-threaded” links many carriers have with others in the global community.  California has the majority of North America / Asia carrier interconnections today, but it is also one of the world’s higher-risk locations for critical infrastructure: it is more a matter of “when” than “if” a catastrophic event such as an earthquake will seriously disrupt international communications passing through one of the region’s MMRs.

The telecom industry needs to have the option of alternate paths of communications and interconnection points.  While the WBX stands tall on its own as a carrier hotel and interconnection site, it is also the best alternative and diverse landing point for trans-Pacific submarine cable capacity – and subsequent interconnections.

The WBX offers a wide range of customer services, including:

  • Engineering support
  • 24×7 Remote hands
  • Fast turnaround for interconnections
  • Colocation
  • Power circuit monitoring and management
  • Private suites and lease space for larger companies
  • 24×7 security monitoring and access control

Check out the Westin Building and WBX the next time you are in Seattle, or, if you want to learn more about the telecom community revolving and evolving in the Seattle area, contact Mike Rushing at mrushing@westinbldg.com for more information.

 

Data Center Consolidation and Adopting Cloud Computing in 2013

Throughout 2012 large organizations and governments around the world continued to struggle with the idea of consolidating inefficient data centers, server closets, and individual “rogue” servers scattered around their enterprise or government agencies.  The issues included the cost of operating data centers, disaster management of information technology resources, and of course human factors centered on control, power, or retention of jobs in a rapidly evolving IT industry.

Cloud computing and virtualization continue to have an impact on all consolidation discussions, not only from the standpoint of providing a much better model for managing physical assets, but also in the potential cloud offers to solve disaster recovery shortfalls, improve standardization, and encourage or enable development of service-oriented architectures.

Our involvement in projects ranging from local, state, and national government levels in both the United States and other countries indicates a consistent need for answering the following concerns:

  • Existing IT infrastructure, including both IT and facility, is reaching the end of its operational life
  • Collaboration requirements between internal and external users are expanding quickly, driving an architectural need for interoperability
  • Decision support systems require access to both raw data, and “big data/archival data”

We would like to see an effort within the IT community to move in the following directions:

  1. Real effort at decommissioning and eliminating inefficient data centers
  2. All data and applications should be fit into an enterprise architecture framework – regardless of the size of organization or data
  3. Aggressive development of standards supporting interoperability, portability, and reuse of objects and data

Despite the very public failures experienced by cloud service providers over the past year, the reality is that cloud computing as an IT architecture and model is gaining traction, and it is not likely to go away any time soon.  As with any emerging service or technology, cloud services will continue to develop and mature, reducing the impact and frequency of failures.

Why would an organization continue to buy individual high-powered workstations, individual software licenses, and device-bound storage when the same application can be delivered to a simple display, or a wide variety of displays, with standardized web-enabled cloud (SaaS) applications that store mission-critical data and images on a secure storage system at a secure site?  Why not facilitate the transition from CAPEX to OPEX, license to subscription, infrastructure to product and service development?

In reality, unless an organization is in the hardware or software development business, there is very little technical justification for building and managing a data center.  This includes secure facilities supporting military or other sensitive sites.

The cost of building and maintaining a data center, compared with either outsourcing into a commercial colocation site or virtualizing data, applications, and network access, has gained the attention of CFOs and CEOs, requiring IT managers to more explicitly justify the cost of building internal infrastructure versus outsourcing.  This is quickly becoming a very difficult task.

Money spent on data center infrastructure is lost to the organization.  The cost of labor is high, as are the costs of energy, space, and maintenance – money that could be better applied to product and service development, customer service capacity, or other revenue- and customer-facing activities.

The Bandwidth Factor

The one major limitation the IT community will need to overcome as data center consolidation continues and cloud services become the norm is bandwidth.  Applications such as streaming video, unified communications, and other data-intensive workloads will need more bandwidth.  The telecom companies are making progress, having deployed 100Gbps backbone capacity in many markets.  However, this capacity will need to continue growing quickly to meet the needs of organizations accessing data and applications stored or hosted within a virtual or cloud computing environment.

Consider a national government’s IT requirements.  The government, like most, is based within a metro area.  The agencies and departments consolidate their individual data centers and server closets into a central or reduced number of facilities.  Government interoperability frameworks begin to make small steps toward cross-agency data sharing, and individual users need access to a variety of applications and data sources to fulfill their decision support requirements.

For example, consider a GIS (Geospatial/Geographic Information System) with multiple demographic or other overlays.  Individual users will need to display data drawn from several sources, through GIS applications, rendering a large amount of complex data on individual screens.  Without broadband access both between the user and the application, and between the application and its data sources, the result will be a very poor user experience.
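To see why broadband matters here, a rough sketch of transfer times, assuming a hypothetical 2 GB multi-layer GIS response and ignoring protocol overhead (the payload size and link speeds are illustrative, not measurements):

```python
# Back-of-the-envelope transfer times for a large GIS result set.
# The 2 GB payload and the link speeds are hypothetical illustrations.

def transfer_seconds(payload_gb: float, link_mbps: float) -> float:
    """Seconds to move payload_gb over a link_mbps link (no overhead)."""
    payload_megabits = payload_gb * 8 * 1000  # GB -> megabits (decimal units)
    return payload_megabits / link_mbps

payload_gb = 2.0
for label, mbps in [("10 Mbps DSL", 10), ("100 Mbps fiber", 100), ("1 Gbps", 1000)]:
    print(f"{label:>15}: {transfer_seconds(payload_gb, mbps):7.1f} s")
```

At 10 Mbps the user waits nearly half an hour for the full result; at 1 Gbps the same payload moves in seconds.  That gap is the difference between an interactive decision support tool and an unusable one.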

Another example is using the capabilities of video conferencing, desktop sharing, and interactive persistent-state application sharing.  Without adequate bandwidth this is simply not possible.

Revisiting the “4th Utility” for 2013

The final vision on the 2013 “wishlist” is that we, as an IT industry, continue to acknowledge the need to develop the 4th Utility.  This is the idea that broadband communications, processing capacity (including SaaS applications), and storage are the right of all citizens.  Much like the first three utilities (roads, water, and electricity), the 4th Utility must be a basic part of all discussions related to national, state, or local infrastructure.  As we move further into the millennium, Internet-enabled communications, or something like them, will be an essential part of all our lives.

The 4th Utility requires that high-capacity fiber optic infrastructure and broadband wireless be delivered to any location within the country which supports a community, or an individual connected to a community.  We’ll have to pay a fee to access the utility (same as the other utilities), but it is our right to access it and our obligation to deliver it.

2013 will be a lot of fun for us in the IT industry.  Cloud computing is going to impact everybody – one way or the other.  Individual data centers will continue to close.  Service-oriented architectures, enterprise architecture, process modeling, and design efficiency will drive a lot of innovation.  We’ll lose some players, gain some players, and we’ll be in a better position at the end of 2013 than we are today.

5 Data Center Technology Predictions for 2012

2011 was a great year for technology innovation.  The science of data center design and operations continued to improve, the move away from mixed-use buildings used as data centers continued, the watts/sqft metric took a back seat to overall kilowatts available to a facility or customer, and the idea of compute capacity and broadband as a utility began to take its place as a basic right of citizens.

However, there are 5 areas where we will see additional significant advances in 2012.

1.  Data Center Consolidation.  The US Government admits it is using only 27% of its overall available compute power.  With 2,094 data centers supporting the federal government (from the CIO’s 25 Point Plan to Reform Federal IT Management), the government is required to close at least 800 of those data centers by 2015.

The lesson is not lost on state and local governments, private industry, or even Internet content providers.  The economics of operating a data center or server closet, whether in costs of real estate, power, or hardware, in addition to service and licensing agreements, are compelling enough to make even the most fervent server-hugger reconsider their religion.

2.  Cloud Computing.  Who doesn’t believe cloud computing will eventually replace the need for server closets, cabinets, or even small cages in data centers?  The move to cloud computing is as certain as the move to email was in the 1980s.

Some IT managers and data owners hate the idea of cloud computing, enterprise service busses, and consolidated data.  Not so much an issue of losing control, but in many cases because it brings transparency to their operation.  If you are the owner of data in a developing country, and suddenly everything you do can be audited by a central authority – well it might make you uncomfortable…

A lesson learned while attending a  fast pitch contest during late 2009 in Irvine, CA…  An enterprising entrepreneur gave his “pitch” to a panel of investment bankers and venture capital representatives.  He stated he was looking for a $5 million investment in his startup company. 

A panelist asked what the money was for, and the entrepreneur stated, “… and $2 million to build out a data center…”  The panelist responded that 90% of new companies fail within two years – why would he want to be stuck with the liability of a data center and hardware if the company failed?  The panelist further stated, “Don’t waste my money on a data center – do the smart thing, use the Amazon cloud.”

3.  Virtual Desktops and Hosted Office Automation.  How many times have we lost data and files due to a failed hard drive, stolen laptop, or virus disrupting our computer?  What is the cost or burden of keeping licenses updated, versions updated, and security patches current in an organization with potentially hundreds of users?  What is the lead time when a user needs a new application loaded on a computer?

From applications as simple as Google Docs, to Microsoft 365, and other desktop replacement applications suites, users will become free from the burden of carrying a heavy laptop computer everywhere they travel.  Imagine being able to connect your 4G/LTE phone’s HDMI port to a hotel widescreen television monitor, and be able to access all the applications normally used at a desktop.  You can give a presentation off your phone, update company documents, or nearly any other IT function with the only limitation being a requirement to access broadband Internet connections (See # 5 below).

Your phone can already connect to Google Docs and Microsoft Live Office, and the flexibility of access will only improve as iPads and other mobile devices mature.

The other obvious benefit is files will be maintained on servers, much more likely to be backed up and included in a disaster recovery plan.

4.  The Science of Data Centers.  It has only been a few years since small hosting companies were satisfied to go into a data center carved out of a mixed-use building, happy to have access to electricity, cooling, and a menu of available Internet network providers.  Most rooms were designed to accommodate 2~3kW per cabinet, and users installed servers, switches, NAS boxes, and routers without regard to alignment or power usage.

That has changed.  No business or organization can survive without a 24x7x365 presence on the Internet, and most small enterprises – and large enterprises – are either consolidating their IT into professionally managed data centers, or have already washed their hands of servers and other IT infrastructure.

The Uptime Institute, BICSI, TIA, and government agencies have begun publishing guidelines on data center construction providing best practices, quality standards, design standards, and even standards for evaluation.  Power-efficiency metrics such as PUE and DCiE provide additional guidance on power management, data center management, and design.
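For reference, PUE and DCiE are simple ratios of facility power to IT power.  A minimal sketch (the 1,500 kW and 1,000 kW figures are illustrative, not drawn from any particular facility):

```python
# Minimal sketch of the PUE and DCiE data center efficiency metrics.
# PUE  = total facility power / IT equipment power (lower is better; ideal 1.0)
# DCiE = IT equipment power / total facility power, expressed as a percentage

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    return it_equipment_kw / total_facility_kw * 100  # reciprocal of PUE, in %

# Illustrative facility: 1,500 kW total draw, 1,000 kW reaching the IT loads
print(f"PUE:  {pue(1500, 1000):.2f}")
print(f"DCiE: {dcie(1500, 1000):.1f}%")
```

Every kilowatt above the IT load is overhead (cooling, power conversion, lighting), so driving PUE toward 1.0 is the core design goal these guidelines address.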

The days of small business technicians running into a data center at 2 a.m. to install new servers, repair broken servers, and pile their empty boxes or garbage in their cabinet or cage on the way out are gone.  The new data center religion is discipline, standards, discipline, and security. 

Electricity is as valuable as platinum, and cooling and heat are managed more closely than inmates at San Quentin.  With nearly every standards organization now offering certification in cabling, data center design, and data center management, we can soon expect universities to offer an MS or Ph.D. in data center sciences.

5.  The 4th Utility Gains Traction.  Orwell’s “1984” painted a picture of pervasive government surveillance, and incessant public mind control (Wikipedia).  Many people believe the Internet is the source of all evil, including identity theft, pornography, crime, over-socialization of cultures and thoughts, and a huge intellectual time sink that sucks us into the need to be wired or connected 24 hours a day.

Yes, that is pretty much true, and if we do not weigh the 1,000 good things about the Internet against each negative aspect, it might seem a pretty scary place in which to expose and indoctrinate all future generations.  The alternative is to live in an intellectual Brazilian or Papuan rain forest, one step out of the evolutionary stone age.

The Internet is not going away, unless some global repressive government, fundamentalist religion, or dictator manages to dismantle civilization as we know it.

The 4th utility identifies broadband access to the ‘net as a basic right of all citizens, with the same status as roads, water, and electricity.  All governments with a desire to have their nation survive and thrive in the next millennium will find a way to cooperate with network infrastructure providers to build out their national information infrastructure (haven’t heard that term since Al Gore, eh?).

Without a robust 4th utility, our children and their children will produce a global generation of intellectual migrant workers, intellectual refugees from a failed national information sciences vision and policy.

2012 should be a great year.  All the above predictions are positive and, if proved true, will leave the United States and other countries with stronger capacities to improve their national quality of life, bringing us all another step closer together.

Happy New Year!
