PTC 2015 Wraps Up with Strong Messages on SDNs and Automation

Software Defined Networking (SDN) and Network Function Virtualization (NFV) themes dominated workshops and side conversations throughout the PTC 2015 venue in Honolulu, Hawai’i this week.

Carrier SDNs, or more specifically provisioning automation platforms for service provider interconnections, have crept into nearly all marketing materials and elevator pitches in discussions with submarine cable operators, networks, Internet Exchange Points, and carrier hotels.

While some of the material may have included a bit of “SDN washing,” for the most part each operator and service provider engaging in the discussion understands and is scrambling to address the need for automated communications access, and is very serious in acknowledging a pending industry “paradigm shift” in service delivery models.

Presentations by companies such as Ciena and Riverbed showed a mature service delivery structure based on SDNs, while PacNet and Level 3 Communications (formerly TW Telecom) presented functional on-demand, self-service models of both service provisioning and a value-added marketplace.

Steve Alexander from Ciena explained some of the challenges the industry must address, such as the development of cross-industry SDN-enabled service delivery and provisioning standards.  In addition, as service providers move into service delivery automation, they must still be able to provide a discriminating or unique selling point by considering:

  • How to differentiate their service offering
  • How to differentiate their operations environment
  • How to ensure industry-acceptable delivery and provisioning time cycles
  • How to deal with legacy deployments

Alexander also emphasized that as an industry we need to get away from physical wiring when possible.  With 100Gbps ports, and the ability to create a software abstraction of individual circuits within the 100Gbps resource pool (as an example), there is a lot of virtual or logical provisioning that can be accomplished without the need for dozens or hundreds of physical cross connections.
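
To make that concrete, here is a minimal sketch (Python; the port name, circuit identifiers, and capacities are hypothetical, not any vendor’s actual API) of carving logical circuits out of a single 100Gbps port, so adding a service becomes a software change rather than another physical cross connect:

    # Illustrative only: logical circuits carved from one physical 100Gbps port.
    class PhysicalPort:
        def __init__(self, name, capacity_gbps=100):
            self.name = name
            self.capacity_gbps = capacity_gbps
            self.circuits = {}  # circuit_id -> allocated Gbps

        def available_gbps(self):
            return self.capacity_gbps - sum(self.circuits.values())

        def provision_circuit(self, circuit_id, gbps):
            # Refuse the request rather than oversubscribe the physical port.
            if gbps > self.available_gbps():
                return False
            self.circuits[circuit_id] = gbps
            return True

        def release_circuit(self, circuit_id):
            self.circuits.pop(circuit_id, None)

    port = PhysicalPort("westin-mmr-port-07")
    port.provision_circuit("cust-a-to-tokyo", 10)
    port.provision_circuit("cust-b-to-sydney", 40)
    print(port.available_gbps())  # 50 Gbps still free on the same cross connect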

The result of this effort should be an on-demand provisioning environment both within a single service provider and across a broader community marketplace such as a carrier hotel or large telecom interconnection facility (e.g., The Westin Building, 60 Hudson, One Wilshire).  Some examples of actual and required deployments included (a rough sketch of what a marketplace order might look like follows the list):

  • A bandwidth on-demand marketplace
  • Data center interconnections, including data center operators with multiple interconnected meet-me points spread across a geographic area
  • Interconnection to other services within the marketplace such as cloud service providers (e.g., Amazon Direct Connect, Azure, Softlayer, etc.), content delivery networks, SaaS, and disaster recovery capacity and services
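
As a rough illustration of what a buyer-side order in such a bandwidth-on-demand marketplace might look like, the sketch below (Python; the port identifiers, field names, and the marketplace itself are invented for illustration, not any operator’s real API) builds a short-term, metered interconnection request:

    # Hypothetical marketplace order; field names and endpoints are illustrative.
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class InterconnectOrder:
        a_end: str          # buyer's port in the carrier hotel
        z_end: str          # e.g., a cloud on-ramp or another member's port
        bandwidth_mbps: int
        term_hours: int     # short-term, metered contract down to the hour

    order = InterconnectOrder(
        a_end="onewilshire:cab-12:port-3",
        z_end="cloud-provider-x:on-ramp-la",
        bandwidth_mbps=500,
        term_hours=48,
    )
    print(json.dumps(asdict(order), indent=2))  # payload a marketplace portal might accept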

Robust discussions on standards also spawned debate.  With SDNs, much like any other emerging use of technologies or business models, there are both competing and complementary standards.  Even terms such as Network Function Virtualization / NFV, while useful, do not have much depth within standard taxonomies or definitions.

During the PTC 2015 session entitled “Advanced Capabilities in the Control Plane Leveraging SDN and NFV Toward Intelligent Networks,” a long list of current standards and products supporting the “concept” of SDNs was presented, including:

  • OpenContrail
  • OpenDaylight
  • OpenStack
  • OpenFlow
  • OPNFV
  • ONOS
  • OvS
  • Project Floodlight
  • Open Networking
  • and on and on….

For consumers and small network operators this is a very good development, and will certainly usher in a new era of on-demand self-service capacity provisioning, elastic provisioning (short-term service contracts even down to the minute or hour), carrier hotel-based bandwidth and service marketplaces, and variable usage metering and costs, allowing much better use of OPEX budgets.

For service providers (according to discussions with several North Asian telecom carriers), it is not quite as attractive, as they generally prefer long-term, fixed contracts or wholesale capacity sales.

The connection and integration of cloud services with telecom or network services is quite clear.  At some point provisioning of both telecom and compute/storage/application services will be through a single interface, on-demand, elastic (use only what you need and for only as long as you need it), usage-based (metered), and favor the end user.

While most operators get the message and are in the process of developing and deploying their first-iteration solutions, others simply still have a bit of homework to do.  In the words of one CEO from a very large international data center company, “we really need to have a strategy to deal with this multi-cloud, hybrid cloud, or whatever you call it thing.”

Oh my…

OSS Development for the Modern Data Center

Modern data centers are very complex environments.  Data center operators must have visibility into a wide range of integrated databases, applications, and performance indicators to effectively understand and manage their operations and activities.

While each data center is different, all Data Centers share some common systems and common characteristics, including:

  • Facility inventories
  • Provisioning and customer fulfillment processes
  • Maintenance activities (including computerized maintenance management systems, or CMMS)
  • Monitoring
  • Customer management (including CRM, order management, etc.)
  • Trouble management
  • Customer portals
  • Security Systems (physical access entry/control and logical systems management)
  • Billing and Accounting Systems
  • Service usage records (power, bandwidth, remote hands, etc.)
  • Decision support system and performance management integration
  • Standards for data and applications
  • Staffing and activities-based management
  • Scheduling /calendar
  • etc…

Unfortunately, in many cases, the above systems are managed manually, lack standards, and have no automation or integration interconnecting individual back office components.  This also includes many communication companies and telecommunications carriers which previously adhered, or claimed to adhere, to Bellcore data and operations standards.

In some cases, the lack of integration is due to the many mergers and acquisitions of companies which have unique or non-standard back office systems.  The result is difficulty in cross provisioning, billing, integrated customer management, and accounting – the day-to-day operations of a data center.

Modern data centers must have a high level of automation.  In particular, if a data center operator owns multiple facilities, it becomes very difficult to have a common look and feel or high level of integration allowing the company to offer a standardized product to their markets and customers.

Operational support systems (OSS) traditionally have four main components:

  • Support for process automation
  • Collection and storage for a wide variety of operational data
  • The use of standardized data structures and applications
  • Supporting technologies

In most commercial or public colocation facilities and data centers, customer and tenant organizations represent many different industries, products, and services.  Some large colocation centers may have several hundred individual customers.  Other data centers may have larger customers such as cloud service providers, content delivery networks, and other hosting companies.  While single large customers may be few, their internal hosted or virtual customers may be at the scale of hundreds, or even thousands, of individual customers.

To effectively support their customers, data centers must have comprehensive OSS capabilities.  Given the large number of processes, data sources, and user requirements, the OSS should be designed and developed using a standard architecture and framework which will ensure OSS integration and interoperability.

OSS Components

We have conducted numerous Interoperability Readiness surveys with both government and private sector (commercial) data center operators during the past five years.  In more than 80% of the surveys, processes such as inventory management were built within simple spreadsheets.  Provisioning of inventory items was normally a manual process conducted via e-mail or, in some cases, paper forms.

This manual provisioning resulted in cases of double-booked or double-sold inventory items, as well as inefficient orders for adding customer-facing inventory or building out additional data center space.
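
As a minimal sketch of why inventory belongs in a database rather than a spreadsheet, the example below (Python with SQLite from the standard library; the table and cabinet names are invented for illustration) makes each reservation atomic, so the same cabinet cannot be sold twice:

    # Illustrative only: an atomic update keeps two order clerks from
    # double-selling the same cabinet, something a shared spreadsheet cannot do.
    import sqlite3

    conn = sqlite3.connect("inventory.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS cabinets (
                      id TEXT PRIMARY KEY,
                      status TEXT NOT NULL DEFAULT 'available',
                      customer TEXT)""")
    conn.execute("INSERT OR IGNORE INTO cabinets (id) VALUES ('row3-cab17')")
    conn.commit()

    def reserve_cabinet(cabinet_id, customer):
        # The UPDATE succeeds only if the cabinet is still available,
        # so concurrent orders cannot both claim it.
        cur = conn.execute(
            "UPDATE cabinets SET status='reserved', customer=? "
            "WHERE id=? AND status='available'",
            (customer, cabinet_id))
        conn.commit()
        return cur.rowcount == 1

    print(reserve_cabinet("row3-cab17", "customer-a"))  # True: first order wins
    print(reserve_cabinet("row3-cab17", "customer-b"))  # False: no double booking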

These problems often compounded into others, such as missed customer billing cycles, accounting shortfalls, and management or monitoring system errors.

The new data center, including virtual data centers within cloud service providers, must develop better OSS tools and systems to accommodate the rapidly changing need for elasticity and agility in ICT systems.  This includes having a single window for all required items within the OSS.

Preparing an OSS architecture based on a service-oriented architecture (SOA) should include the use of ICT-friendly frameworks and guidance such as TOGAF and/or ITIL, to ensure all visions and designs fully acknowledge and embrace the needs of each organization’s business owners and customers, and follow a comprehensive, structured development process to ensure those objectives are delivered.

Using standard databases, APIs, service buses, and security, and establishing a high level of governance to ensure a “standards and interoperability first” policy for all data center IT, will allow all systems to communicate, share, and reuse data, ultimately providing automated, single-source data resources for all data center management, accounting, and customer activities.
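
One hedged illustration of that “standards and interoperability first” idea: rather than re-keying an inventory change into billing and monitoring by hand, each system publishes and consumes a common event.  The sketch below uses plain Python callbacks as a stand-in for whatever service bus an operator actually deploys; the topic and field names are invented:

    # Minimal stand-in for a service bus: one provisioning event, many consumers,
    # and no human re-typing data into billing or monitoring systems.
    subscribers = {}  # topic -> list of handler functions

    def subscribe(topic, handler):
        subscribers.setdefault(topic, []).append(handler)

    def publish(topic, event):
        for handler in subscribers.get(topic, []):
            handler(event)

    # Billing and monitoring both react to the same provisioning record.
    subscribe("inventory.provisioned",
              lambda e: print("billing: start invoicing", e["cabinet"]))
    subscribe("inventory.provisioned",
              lambda e: print("monitoring: add sensors for", e["cabinet"]))

    publish("inventory.provisioned",
            {"cabinet": "row3-cab17", "customer": "customer-a", "power_kw": 4})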

Any manual transfer of data between offices, applications, or systems should be prevented in favor of integrating inventory, data collections and records, processes, and performance management indicators into a fully integrated and interoperable environment.  A basic rule of thumb might be that if a human being has touched data, then the data has likely been corrupted, or at least its integrity may be brought into question.

Looking ahead to the next generation of data center services, stepping a bit higher up the customer service maturity continuum requires much higher levels of internal process and customer process automation.

Following NIST’s definition of cloud computing, whose essential characteristics include “on-demand self-service,” “rapid elasticity,” and “measured service,” in addition to resource pooling and broad network access, it can be assumed that data center users of the future will need to order and fulfill services such as network interconnections, power, virtual or physical space, and other services through self-service, on-demand ordering.

The OSS must strive to meet the following objectives:

  • Standardization
  • Interoperability
  • Reusable components and APIs
  • Data sharing

Accomplishing this will require nearly all of the above-mentioned OSS characteristics: inventories in databases (not spreadsheets), process automation, and standards in data structures, APIs, and application interoperability.

And as the ultimate key success factor, management decision support systems (DSS) will finally have the potential for true dashboards for performance management, data analytics, and additional real-time tools for making effective organizational decisions.

You Want Money for a Data Center Buildout?

Yield to Cloud

A couple of years ago I attended several “fast pitch” competitions and events for entrepreneurs in Southern California, all designed to give startups a chance to “pitch” their ideas in about 60 seconds to a panel of representatives from the local investment community.  Similar to television’s “Shark Tank,” most of the pitches were harshly critiqued, with the real intent of assisting participating entrepreneurs in developing a better story for approaching investors and markets.

While very few of the pitches received a strong, positive response, I recall one young guy who really set the panel back a step in awe.  The product was related to biotech, and the panel provided a very strong, positive response to the pitch.

Wishing to dig a bit deeper, one of the panel members asked the guy how much money he was looking for in an investment, and how he’d use the money.

“$5 million,” he responded, with a resounding wave of nods from the panel.  “I’d use around $3 million for staffing, getting the office started, and product development.”  Another round of positive expressions.  “And then we’d spend around $2 million setting up in a data center with servers, telecoms, and storage systems.”

This time the panel looked as if they’d just taken a crisp slap to the face.  After a moment of collection, the panel spokesman launched into a dressing-down of the entrepreneur, stating “I really like the product, and think your vision is solid.  However, with a greater than 95% chance of your company going bust within the first year, I have no desire to be stuck with $2 million worth of obsolete computer hardware, and potentially contract liabilities, once you shut down your data center.  You’ve got to use your head, look at going to Amazon for your data center capacity, and forget this data center idea.”

Now it was the entire audience’s turn to take a pause.

In the past IT managers placed buying and controlling their own hardware, in their own facility, as a high priority – with no room for compromise.  Whether for perceptions of security, a desire for personal control, or simply a concern that outsourcing would limit their own career potential, server closets and small data centers were a common characteristic of most small offices.

At some point a need to have proximity to Internet or communication exchange points, or simple limitations on local facility capacity started forcing a migration of enterprise data centers into commercial colocation.  For the most part, IT managers still owned and controlled any hardware outsourced into the colocation facility, and most agreed that in general colocation facilities offered higher uptime, fewer service disruptions, and good performance, in particular for eCommerce sites.

Now we are at a new IT architecture crossroads.  Is there really any good reason for a startup, medium, or even large enterprise to continue operating their own data center, or even their own hardware within a colocation facility?  Certainly if the average CFO or business unit manager had their choice, the local data center would be decommissioned and shut down as quickly as possible.  The CAPEX investment, carrying hardware on the books for years of depreciation, lack of business agility, and the costs and risks of business continuity and disaster recovery force the question: “why don’t we just rent IT capacity from a cloud service provider?”

Many still question the security of public clouds, many still question the compliance issues related to outsourcing, and many still simply do not want to give up their “soon-to-be-redundant” data center jobs.

Of course it is clear most large cloud computing companies have much better resources available to manage security than a small company, and have made great advances in compliance certifications (mostly due to the US government acknowledging the role of cloud computing and changing regulations to accommodate those changes).  If we look at the US Government’s FedRAMP certification program as an example, security, compliance, and management controls are now a standard – open for all organizations to study and adopt as appropriate.

So we get back to the original question: what would justify a company continuing to develop data centers, when a virtual data center (as the first small step in adopting a cloud computing architecture) will provide better flexibility, agility, security, and performance, at lower cost, than operating local or colocated physical IT infrastructure?  Sure, exceptions exist, including some specialized hardware interfaces to support mining, health care, or other very specialized activities.  However, if you are not in the computer or switch manufacturing business – can you really continue justifying CAPEX expenditures on IT?

IT is quickly becoming a utility.  As a business we do not plan to build roads, build water distribution, or build our own power generation plants.  Compute, telecom, and storage resources are becoming a utility, and IT managers (and data center / colocation companies) need to do a comprehensive review of their business and strategy, and find a way to exploit this technology reality, rather than allow it to pass us by.

Gartner Data Center Conference Looks Into Open Source Clouds and Data Backup

Day two of the Gartner Data Center Conference in Las Vegas continued reinforcing old topics, appearing at times either to enlist attendees in contributing to Gartner research, or simply to provide conference content directed at promoting conference sponsors.

For example, sessions “To the Point:  When Open Meets Cloud” and “Backup/Recovery: Backing Up the Future” included a series of audience surveys.  Those surveys were apparently the same as presented, in the same sessions, for several years.  Thus the speaker immediately referenced this year’s results vs. results from the same survey questions from the past two years.  This would lead a casual attendee to believe nothing radically new is being presented in the above topics, and the attendees are generally contributing to further trend analysis research that will eventually show up in a commercial Gartner Research Note.

Gartner analyst and speaker on the topic of “When Open Meets Cloud,” Aneel Lakhani, did make a couple of useful, if somewhat obvious, points in his presentation:

  • We cannot secure complete freedom from vendors, regardless of how much open source we adopt
  • Open source can actually be more expensive than commercial products
  • Interoperability is easy to say, but a heck of a lot more complicated to implement
  • Enterprise users have a very low threshold for “test” environments (sorry DevOps guys)
  • If your organization has the time and staff, test, test, and test a bit more to ensure your open source product will perform as expected or designed

However, analyst Dave Russell, speaker on the topic of “Backup/Recovery,” was a bit more cut-and-paste in his approach.  Lots of questions to match against last year’s conference, and a strong emphasis on using tape as a continuing, if not growing, medium for disaster recovery.

The problem with this presentation was that the discussion centered on backing up data, with very little on business continuity.  In fact, in one slide he referenced a recovery point objective (RPO) of one day for backups.  What organization operating in a global market, in Internet time, can possibly design for a one-day RPO?

In addition, there was no discussion on the need for compatible hardware in a disaster recovery site that would allow immediate or rapid restart of applications.  Having data on tape is fine.  Having mainframe archival data is fine.  But without a business continuity capability, it is likely any organization will suffer significant damage in their ability to function in their marketplace.  Very few organizations today can absorb an extended global presence outage or marketplace outage.

The conference continues until Thursday, and we will look for more positive approaches to data center and cloud computing.

Gartner Data Center Conference Yields Few Surprises

Gartner’s 2012 Data Center Conference in Las Vegas is notable for not yielding any major surprises.  While drawing an impressive number of attendees (the stats are not available, however it is clear they are having a very good conference), most of the sessions appear to simply reaffirm what everybody really knows already, serving to reinforce the reality that data center consolidation, cloud computing, big data, and the move to an interoperable framework will be part of everybody’s life within a few years.

Gartner analyst Ray Paquet started the morning by drawing a line at the real value of server hardware in cloud computing.  Paquet stressed that cloud adopters should avoid integrated hardware solutions based on blade servers, which carry a high margin, and focus their CAPEX on cheaper “skinless” servers.  Paquet emphasized that integrated solutions are a “waste of money.”

Cameron Haight, another Gartner analyst, fired a volley at the process and framework world with a comparison of the value DevOps brings versus ITIL.  He described ITIL as a cumbersome burden to organizational agility, and DevOps as a culture-changer that allows small groups to respond quickly to challenges.  Haight emphasized the frequently stressful relationship between development and operations organizations, where operations demands stability and quality, and development needs freedom to move projects forward, sometimes without the comfort of baking code to the standards preferred by operations – and required by frameworks such as ITIL.

Haight’s most direct slide described DevOps as being “ITIL minus CRAP.”  Of course most of his supporting slides for moving to DevOps looked eerily like an ITIL process….

Other sessions attended (by the author) included “Shaping Private Clouds,” a WIPRO product demonstration, and a data center introduction by Raging Wire.  All valuable introductions for those who are considering making a major change in their internal IT deployments, but nothing cutting edge or radical.

The Raging Wire data center discussion did raise some questions on the overall vulnerability of large box data centers.  While it is certainly possible to build a data center up to any standard needed to fulfill a specific need, the large data center clusters in locations such as Northern Virginia are beginning to appear very vulnerable to natural, human, or equipment failure disruptions.  In addition to fulfilling data center tier classification models as presented by the Uptime Institute, it is clear we are producing critical national infrastructure which, if disrupted, could cause significant damage to the US economy or even social order.

Eventually, much like the communications infrastructure in the US, data centers will need to come under the observation or review of a national agency such as Homeland Security.  While nobody wants a government officer in the data center, protection of national infrastructure is a consideration we probably will not be able to avoid for long.

Raging Wire also noted that some colocation customers, particularly social media companies, are hitting up to 8kW per cabinet.  That is also scary if true, particularly in extended deployments, and could result in serious operational problems if cooling systems were disrupted, as the heat generated in those cabinets would quickly become extreme.  It would also be interesting if Raging Wire and other colocation companies considered developing real-time CFD monitoring for their data center floors, allowing better monitoring and predictability than simple zone monitoring solutions.

The best presentation of the day came at the end, “Big Data is Coming to Your Data Center.”  Gartner’s Sheila Childs brought color and enthusiasm to a topic many consider, well, boring.  Childs was able to bring the value, power, and future of big data into a human-consumable format that kept the audience in their seats until the end of the session at 6 p.m.

Childs hit on concepts such as “dark data” within organizations, the value of big data in decision support systems (DSS), and the need for developing and recruiting skilled staff who can actually write or build the systems needed to fully exploit the value of big data.  We cannot argue that point, and can only hope our education system is able to focus on producing graduates with the basic skills needed to fulfill that requirement.

5 Data Center Technology Predictions for 2012

2011 was a great year for technology innovation.  The science of data center design and operations continued to improve, the move away from mixed-use buildings used as data centers continued, the watts/sq ft metric took a back seat to overall kilowatts available to a facility or customer, and the idea of compute capacity and broadband as a utility began to take its place as a basic right of citizens.

However, there are 5 areas where we will see additional significant advances in 2012.

1.  Data Center Consolidation.  The US Government admits it is using only 27% of its overall available compute power.  With 2,094 data centers supporting the federal government (from the CIO’s 25 Point Plan to Reform Federal IT Management), the government is required to close at least 800 of those data centers by 2015.

The lesson is not lost on state and local governments, private industry, or even Internet content providers.  The economics of operating a data center or server closet – whether in costs of real estate, power, or hardware, in addition to service and licensing agreements – are compelling enough to make even the most fervent server-hugger reconsider their religion.

2.  Cloud Computing.  Who doesn’t believe cloud computing will eventually replace the need for server closets, cabinets, or even small cages in data centers?  The move to cloud computing is as certain as the move to email was in the 1980s.

Some IT managers and data owners hate the idea of cloud computing, enterprise service busses, and consolidated data.  Not so much an issue of losing control, but in many cases because it brings transparency to their operation.  If you are the owner of data in a developing country, and suddenly everything you do can be audited by a central authority – well it might make you uncomfortable…

A lesson learned while attending a fast pitch contest during late 2009 in Irvine, CA…  An enterprising entrepreneur gave his “pitch” to a panel of investment bankers and venture capital representatives.  He stated he was looking for a $5 million investment in his startup company.

A panelist asked what the money was for, and the entrepreneur stated “.. and $2 million to build out a data center…”  The panelist responded that 90% of new companies fail within 2 years.  Why would he want to be stuck with the liability of a data center and hardware if the company failed? The gentleman further stated, “don’t waste my money on a data center – do the smart thing, use the Amazon cloud.”

3.  Virtual Desktops and Hosted Office Automation.  How many times have we lost data and files due to a failed hard drive, stolen laptop, or virus disrupting our computer?  What is the cost or burden of keeping licenses updated, versions updated, and security patches current in an organization with potentially hundreds of users?  What is the lead time when a user needs a new application loaded on a computer?

From applications as simple as Google Docs, to Microsoft 365 and other desktop replacement application suites, users will become free from the burden of carrying a heavy laptop computer everywhere they travel.  Imagine being able to connect your 4G/LTE phone’s HDMI port to a hotel widescreen television monitor and access all the applications normally used at a desktop.  You can give a presentation off your phone, update company documents, or perform nearly any other IT function, with the only limitation being a requirement to access broadband Internet connections (see #5 below).

Your phone can already connect to Google Docs and Microsoft Live Office, and the flexibility of access will only improve as iPads and other mobile devices mature.

The other obvious benefit is that files will be maintained on servers, where they are much more likely to be backed up and included in a disaster recovery plan.

4.  The Science of Data Centers.  It has only been a few years since small hosting companies were satisfied to go into a data center carved out of a mixed-use building, happy to have access to electricity, cooling, and a menu of available Internet network providers.  Most rooms were designed to accommodate 2~3kW per cabinet, and users installed servers, switches, NAS boxes, and routers without regard to alignment or power usage.

That has changed.  No business or organization can survive without a 24x7x365 presence on the Internet, and most small enterprises – and large enterprises – are either consolidating their IT into professionally managed data centers, or have already washed their hands of servers and other IT infrastructure.

The Uptime Institute, BICSI, TIA, and government agencies have begun publishing guidelines on data center construction providing best practices, quality standards, design standards, and even standards for evaluation.  Power efficiency metrics such as PUE and DCiE provide additional guidance on power management, data center management, and design.
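
For readers new to the metrics, PUE is simply total facility power divided by the power delivered to IT equipment, and DCiE is its reciprocal expressed as a percentage.  The short sketch below (Python, with hypothetical numbers) works through an example:

    # Hypothetical facility drawing 1.6 MW in total to deliver 1.0 MW to IT gear.
    total_facility_kw = 1600.0
    it_equipment_kw = 1000.0

    pue = total_facility_kw / it_equipment_kw           # 1.6 -- lower is better, 1.0 is ideal
    dcie = it_equipment_kw / total_facility_kw * 100    # 62.5% of power reaches IT equipment

    print(f"PUE = {pue:.2f}, DCiE = {dcie:.1f}%")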

The days of small business technicians running into a data center at 2 a.m. to install new servers, repair broken servers, and pile their empty boxes or garbage in their cabinet or cage on the way out are gone.  The new data center religion is discipline, standards, discipline, and security. 

Electricity is as valuable as platinum, and cooling and heat are managed more closely than inmates at San Quentin.  With every standards organization now offering certification in cabling, data center design, and data center management, we can soon expect universities to offer an MS or Ph.D. in data center sciences.

5.  The 4th Utility Gains Traction.  Orwell’s “1984” painted a picture of pervasive government surveillance, and incessant public mind control (Wikipedia).  Many people believe the Internet is the source of all evil, including identity theft, pornography, crime, over-socialization of cultures and thoughts, and a huge intellectual time sink that sucks us into the need to be wired or connected 24 hours a day.

Yes, that is pretty much true, and if we do not weigh the 1,000 good things about the Internet against each single negative aspect, it might be a pretty scary place to consider all future generations being exposed and indoctrinated.  The alternative is to live in an intellectual Brazilian or Papuan rain forest, one step out of the evolutionary stone age.

The Internet is not going away, unless some global repressive government, fundamentalist religion, or dictator manages to dismantle civilization as we know it.

The 4th utility identifies broadband access to the ‘net as a basic right of all citizens, with the same status as roads, water, and electricity.  All governments with a desire to have their nation survive and thrive in the next millennium will find a way to cooperate with network infrastructure providers to build out their national information infrastructure (haven’t heard that term since Al Gore, eh?).

Without a robust 4th utility, our children and their children will produce a global generation of intellectual migrant workers, intellectual refugees from a failed national information sciences vision and policy.

2012 should be a great year.  All the above predictions are positive and, if proved true, will leave the United States and other countries with stronger capacities to improve their national quality of life, and bring us all another step closer together.

Happy New Year!

Charting the Future of Small Data Centers

Every week a new data center hits the news with claims of greater than 100,000 square feet at >300 watts/square foot, and levels of security rivaling that of the NSA.  Hot and cold aisle containment, marketing people slinging terms such as PUE (Power Usage Effectiveness), modular data centers, containers, computational fluid dynamics, and outsourcing with such smoothness and velocity that even used car salesmen regard them in complete awe.

Don’t get me wrong, outsourcing your enterprise data center or Internet site into a commercial data center (colocation), or a cloud computing-supported virtual data center, is not a bad thing.  As interconnections between cities are reinforced, and sufficient levels of broadband access continue to find their way to both businesses and residences throughout the country – not to mention all the economic drivers such as OPEX, CAPEX, and flexibility in cloud environments – the need to maintain an internal data center or server closet makes little sense.

Small Data Centers Feel Pain

In the late 1990s data center colocation started to develop roots.  The Internet was becoming mature, and eCommerce, entertainment, business-to-business, academic, and government IT operations found proximity to networks a necessity, so the colocation industry formed to meet the opportunity stimulated by Internet adoption.

Many of these data centers were built in “mixed use” buildings, or existing properties in city centers which were close to existing telecommunication infrastructure.  In cities such as Los Angeles, the commercial property absorption in city centers was at a low, providing very available and affordable space for the emerging colocation industry.

The power densities in those early days were minimal, averaging somewhere around 70 watts/square foot.  Thus, equipment installed in colocation space carved out of office buildings was manageable by over-subscribing air conditioning within the space.  The main limitation in the early colocation days was floor loading within an office space, as batteries and equipment cabinets within colocation areas would stretch building structures to their limits.

As the data center industry and Internet content hosting continued to grow, the amount of equipment being placed in mixed-use building colocation centers finally started reaching a breaking point around 2005.  The buildings simply could not support the additional power, cooling, and backup generators needed to support the rapidly developing data center market.

Around that time a new generation of custom-built data center properties began construction, with very little limitation on weight, power consumption, or cooling requirements, and plenty of creativity in custom designs of space to gain the greatest PUE factors and move towards “green” designs.

The “boom town” inner-city data centers then began experiencing difficulty attracting new customers and retaining their existing customer base.  Many of the “dot com” customers ran out of steam during this period, going bankrupt or abandoning their cabinets and cages, while new data center customers fit into a few categories:

  • High end hosting and content delivery networks (CDNs), including cloud computing
  • Enterprise outsourcing
  • Telecom companies, Internet Service Providers, Network Service Providers

With few exceptions these customers demanded much higher power densities, physical security, redundancy, reliability, and access to large numbers of communication providers.  Small data centers operating out of office building space found it very difficult to meet the demands of high-end users, and thus the colocation community began a migration to larger data centers.  In addition, the loss of cash flow from “dot com” churn forced many data centers to shut down, leaving much of the small data center industry in ruins.

Data Center Consolidation and Cloud Computing Compounds the Problem

New companies are finding it very difficult to justify spending money on physical servers and basic software licenses.  If you are able to spool up servers and storage on demand through a cloud service provider – why waste the time and money trying to build your own infrastructure – even infrastructure outsourced or colocated in a small data center?  It is simply a bad investment for most companies to build data centers – particularly if the cloud service provider has inherent disaster recovery and backup utility.

Even existing small eCommerce sites hitting refresh cycles for their hardware and software find it difficult to justify one or two cabinet installations within small data centers when they can accomplish the same thing at lower cost, and with higher performance, by refreshing into a cloud service provider.

Even the US Government, as the world’s largest IT user, has turned its back on small data center installations throughout federal government agencies.

The goals of the Federal Data Center Consolidation Initiative are to assist agencies in identifying their existing data center assets and to formulate consolidation plans that include a technical roadmap and consolidation targets. The Initiative aims to address the growth of data centers and assist agencies in leveraging best practices from the public and private sector to:

  • Promote the use of Green IT by reducing the overall energy and real estate footprint of government data centers;
  • Reduce the cost of data center hardware, software and operations;
  • Increase the overall IT security posture of the government; and,
  • Shift IT investments to more efficient computing platforms and technologies.

To harness the benefits of cloud computing, we have instituted a Cloud First policy. This policy is intended to accelerate the pace at which the government will realize the value of cloud computing by requiring agencies to evaluate safe, secure cloud computing options before making any new investments. (Federal Cloud Computing Strategy)
Add similar initiatives in the UK, Australia, Japan, Canada, and other countries to eliminate inefficient data center programs, plus the level of attention being given to these initiatives in the private sector, and the message is clear: inefficient data center installations may become the exception.

Hope for Small Data Centers?

Absolutely!  There will always be a compelling argument for proximity of data and applications to end users.  Whether this be enterprise data, entertainment, or disaster recovery and business continuity, there is a need for well built and managed data centers outside of the “Tier 1” data center industry.

However, this also means data center operators will need to upgrade their existing facilities to meet the quality and availability standards and requirements of a wired, global, network-enabled community.  Internet and applications/data access is no longer a value-added service, it is critical infrastructure.

Even the most “shoestring” budget facility will need to meet basic standards published by BICSI (e.g., BICSI 2010-002), the Telecom Industry Association (TIA-942), or even private organizations such as the Uptime Institute.

With the integration of network-enabled everything into business and social activities, investors and insurance companies are demanding audits of data centers, using audit standards such as SAS 70 to provide confidence that their investments are protected by satisfactory operational processes and construction.

Even if a data center cannot provide 100,000 square feet of 300 watt/square foot space, as long as it can provide the local market with adequate space and quality to meet customer needs, there will be a market.

This is particularly true for customers who require flexibility in service agreements, custom support, and a large selection of telecommunications companies available within the site, and who need local business continuity options.  Hosting a local Internet exchange point or carrier Ethernet exchange within the facility would also make the space much more attractive.

The Road Ahead

Large data centers and cloud service providers are continuing to expand, developing their options and services to meet the growing data center consolidation and virtualization trend within both the enterprise and the global Internet-facing community.  This makes sense, and will provide a very valuable service for a large percentage of the industry.

Small data centers in Tier 1 cities (in the US that would include Los Angeles, the Northern California Bay Area, New York, and Northern Virginia/DC/MD) are likely to have difficulty competing with extremely large data centers – unless they are able to provide a very compelling service such as hosting a large carrier hotel (network interconnection point), Internet Exchange Point, or cloud exchange.

However, there will always be a need for local content delivery, application (and storage) hosting, disaster recovery, and network interconnection.  Small data centers will need to bring their facilities up to international standards to remain competitive, as their competition is not local, but large data centers in Tier 1 cities.

The Bell Tolls for Data Centers

In the good old days (the late 90s and most of the 2000s) data center operators loved selling individual cabinets to customers.  You could keep your prices high for the cabinet, sell power by the “breakered amp,” and try to maximize cross connects through a data center meet-me room.  All designed to squeeze the most revenue and profit out of each individual cabinet, with the least amount of infrastructure burden.

Fast forward to 2010.  Data center consolidation has become an overwhelming theme, emphasized by US CIO Vivek Kundra’s mandate to force the US government, as the world’s largest IT user, to eliminate most of its more than 1,600 federally owned and operated data centers (consolidating into about a dozen), and to further promote efficiency by adopting cloud computing.

The Gold Standard of Data Center Operators Hits a Speed Bump

Equinix (EQIX) has a lot of reasons and explanations for their expected failure to meet 3rd quarter revenue targets.  Higher than expected customer churn, reducing pricing to acquire new business, additional accounting for the Switch and Data acquisition, etc., etc., etc…

The bottom line is – the data center business is changing.  Single cabinet customers are looking at hosted services as an economical and operational alternative to maintaining their own infrastructure.  Face it, if you are paying for a single cabinet to house your 4 or 5 servers in a data center today, you will probably have a much better overall experience if you can migrate that minimal web-facing or customer-facing equipment into a globally distributed cloud.

Likewise, cloud service providers are supporting the same level of Internet peering as most content delivery networks (CDNs) and Internet Service Providers (ISPs), allowing the cloud user to relieve themselves of the additional burden of operating expensive switching equipment.  The user can still decide which peering, ISP, or network provider they want on the external side of the cloud, but the physical interconnections are no longer necessary within that expensive cabinet.

The traditional data centers are beginning to experience the move to shared cloud services, as is Equinix, through higher churn rates and lower sales rates for those individual cabinets or small cages.

The large enterprise colocation users and CDNs continue to grow larger, adding to their ability to renegotiate contracts with the data centers.  Space, cross connects, power, and service level agreements favor the large footprint and power users, and the result is that data centers are further becoming a highly skilled, sophisticated commodity.

The Next Generation Data Center

There are several major factors influencing data center planners today.  Those include the impact of cloud computing, the emergence of containerized data centers, the need for far greater energy efficiency (often using PUE, or Power Usage Effectiveness, as the metric), and the industry drive towards greater data center consolidation.

Hunter Newby, CEO of Allied Fiber, strongly believes “Just as in the last decade we saw the assembly of disparate networks into newly formed common, physical layer interconnection facilities in major markets, we are now seeing a real coordinated global effort to create new and assemble the existing disparate infrastructure elements of dark fiber, wireless towers and data centers.  This is the next logical step and the first in the right direction for the next decade and beyond.”

We are also seeing data center containers popping up along the long fiber routes, adjacent to traditional breaking points such as in-line amplifiers (ILAs), fiber optic terminals (locations where carriers physically interconnect their networks either for end-user provisioning, access to metro fiber networks, or redundancy), and wireless towers. 

So does this mean the data center of the future is not necessarily confined to large 500 megawatt data center farms, and is potentially something that becomes an inherent part of the transmission network?  The computer is the network, the network is the computer, and all other variations in between?

For archival and backup purposes, or caching purposes, can data exist in a widely distributed environment?

Of course latency within the storage and processing infrastructure will still be dependent on physics for the near term.  Yet for end user applications such as desktop virtualization, there really isn’t any particular reason that we MUST have that level of proximity…  There are probably ways we can “spoof” the systems to think they are located together, and there are a host of other reasons why we do not have to limit ourselves to a handful of “Uber Centers…”

A Vision for Future Data Centers

What if broadband and compute/storage capacity become truly insulated from the user?  What if Carr’s ideas behind the Big Switch are really the future of computing as we know it, our interface to the “compute brain” is limited to dumb devices, and we no longer have to concern ourselves with anything other than writing software against a well-publicized set of standards?

What if the next generation of Equinix is a partner to Verizon or AT&T, and Equinix builds a national compute and storage utility distributed along the fiber routes that is married to the communications infrastructure transmission network?

What if our monthly bill for entertainment, networking, platform, software, and communications is simply the record of how much utility we used during the month, or our subscription fee for the month? 

What if wireless access is transparent, and globally available to all mobile and stationary terminals without reconfiguration and a lot of pain?

No more “remote hands” bills, midnight trips to the data center to replace a blown server or disk, dealing with unfriendly or unknowledgeable “support” staff, or questions of who trashed the network due to a runaway virus or malware commando…

Kind of an interesting idea.

Probably going to happen one of these days.

Now if we can extend that utility to all airlines so I can have 100% wired access, 100% of the time.

Data Centers Hitting a Wall of Cloud Computing

Equinix lowers guidance due to higher than expected churn in its data centers and price erosion among higher end customers.  Microsoft continues to promote hosted solutions and cloud computing.  Companies such as Lee Technologies, CirraScale, Dell, HP, and SGI are producing containerized data centers to improve the efficiency, cost, and manageability of high density server deployments.

The data center is facing a challenge.  The idea of a raised floor, cabinet-based data center is rapidly giving way to virtualization and highly expandable, easy to maintain, container farms.

“The impact of cloud computing will be felt across every part of life, not least the data center, which faces a degree of automation not yet seen.”

Microsoft CEO Steve Ballmer believes “the transition to the cloud [is] fundamentally changing the nature of data center deployment.” (Data Center Dynamics)

As companies such as Allied Fiber continue to develop visions of high density utility fiber ringing North America, with the added potential of dropping containerized cloud computing infrastructure along fiber routes and power distribution centers, and the final interconnection of 4G/LTE/XYZ towers and metro cable along the main routes, the potential of creating a true 4th public utility of broadband with processing/storage capacity becomes clear.

Clouds Come of Age

Data center operators such as Equinix have traditionally provided a great product and service for companies wishing to either outsource their web-facing products into a facility with a variety of Internet Service Providers or Internet Exchange Points providing high performance network access, or eliminate the need for internal data center deployments by outsourcing IT infrastructure into a well-managed, secure, and reliable site.

However the industry is changing.  Companies, in particular startup companies, are finding there is no technical or business reason to manage their own servers or infrastructure, and that nearly all applications are becoming available as cloud-based SaaS (Software as a Service) hosted applications.

Whether you are developing your own virtual data center within a PaaS environment, or simply using Google Apps, Microsoft Hosted Office Applications, or other SaaS, the need to own and operate servers is beginning to make little sense.  Cloud service providers offer higher performance, flexible on-demand capacity, security, user management, and all the other features we have come to appreciate in the rapidly maturing cloud environment.

With containers providing a flexible physical apparatus to easily expand and distribute cloud infrastructure, as a combined broadband/compute utility, even cloud service providers are finding this a strong alternative to placing their systems within a traditional data center.

With the model of “flowing” cloud infrastructure along the fiber route to meet proximity, disaster recovery, or archival requirements, the container model will become a major threat to the data center industry.

What is the Data Center to Do?

Ballmer:

“A data center should be like a container – that you can put under a roof or a cover to stop it getting wet. Put in a slab of concrete, plumb in a little garden hose to keep it cool, yes a garden hose – it is environmentally friendly, connect to the network and power it up. Think of all the time that takes out of the installation.”

Data center operators need to rethink their concept of the computer room.  Building a 150 Megawatt, 2 million square foot facility may not be the best way to approach computing in the future.

Green, low powered, efficient, highly virtualized utility compute capacity makes sense, and will continue to make more sense as cloud computing and dedicated containers continue to evolve.  Containers supporting virtualization and cloud computing can certainly be secured, hardened, moved, replaced, and refreshed with much less effort than the “uber-data center.”

It makes sense, will continue to make even more sense, and if I were to make a prediction, will dominate the data delivery industry within 5~10 years.  If I were the CEO of a large data center company, I would be doing a lot of homework, with a very high sense of urgency, to get a complete understanding of cloud computing and industry dynamics.

I would focus less on selling individual cabinets and electricity, and direct my attention to better understanding cloud computing and the 4th utility of broadband/compute capacity.  I wouldn’t turn out the lights in my carrier hotel or data center quite yet, but this industry will be different in 5 years than it is today.

Given the recent stock volatility in the data center industry, it appears investors are also becoming concerned.

Expanding the 4th Utility to Include Cloud Computing

A lot has been said the past couple of months about broadband as the fourth utility, with the same status as roads, water, and electricity. In America, the next generation will have broadband network access as an entitlement. But is it enough?

Carr, in “The Big Switch,” discusses cloud computing as analogous to the power grid. The only difference is that for cloud computing to be really useful, it has to be connected – to networks, homes, businesses, SaaS, and people. So the next logical extension of the fourth utility, beyond simply referring to broadband network access as a basic right for Americans (and others around the world – it just happens that as an American, for purposes of this article, I’ll refer to my own country’s situation), should include additional resources beyond simply delivering bits.

The “New” 4th Utility

So the next logical step is to marry cloud computing resources, including processing capacity, storage, and software as a service, to the broadband infrastructure. SaaS doesn’t mean you are owned by Google; it simply means you have access to the applications and resources needed to fulfill your personal or community objectives, such as having access to centralized e-Learning resources in the classroom, at home, or at your favorite coffee shop. The network should simply be there, as should the applications needed to run your life in a wired world.

The data center and network industry will need to develop a joint vision that allows this environment to develop. Data centers house compute utility, networks deliver the bits to and from the compute utility and users. The data center should also be the interconnection point between networks, which at some point in the future, if following the idea of contributing to the 4th utility, will finally focus their construction and investments in delivering big pipes to users and applications.

Relieving the User from the Burden of Big Processing Power

As we continue to look at new home and laptop computers with quad-core processors, more than 8 gigs of memory, and terabyte hard drives, it is hard to believe we actually need that much compute power resting on our knees to accomplish the day-to-day activities we perform online. Do we need a quad core computer to check Gmail or our presentation on Microsoft Live Office?

In reality, very few users have applications that require the amounts of processing and storage we find in our personal computers. Yes, there are some applications such as gaming and very high end rendering which burn processing calories, but for most of the world all we really need is a keyboard and screen. This is what the 4th utility may bring us in the future. All we’ll really need is an interface device connecting to the network, and the processing “magic” will take place in a cloud computing center with processing done on a SaaS application.

The interface device is a desktop terminal, intelligent phone (such as an Android, iPhone, or other wired PDA device), laptop, or anything else that can display and input data.

We won’t really care where the actual storage or processing of our application occurs, as long as the application’s latency is near zero.

The “Network is the Computer” Edges Closer to Reality

Since John Gage coined those famous words while working at Sun Microsystems, we’ve been edging closer to that reality. Through the early days of GRID computing, software as a service, and virtualization – added to the rapid development of the Internet over the past 20 years, technology has finally moved compute resource into the network.

If we are honest with ourselves, we will admit that for 95% of computer users, a server-based application meets nearly all of our daily office automation, social media, and entertainment needs. Twitter is not a computer-based application; it is a network-enabled, server-based application. Ditto for Facebook, MySpace, LinkedIn, and most other services.

Now the “Network is the Computer” has finally matured into a utility, and at least in the United States, will soon be an entitlement for every resident. It is also another step in the globalization of our communities, as within time no person, country, or point on the earth will be beyond our terminal or input device.

That is good.
