The Bell Tolls for Data Centers
October 17, 2010
In the good old days (the late 90s and most of the 2000s), data center operators loved selling individual cabinets to customers. You could keep your prices high for the cabinet, sell power by the “breakered amp,” and try to maximize cross connects through a data center meet-me room. All designed to squeeze the most revenue and profit out of each individual cabinet, with the least amount of infrastructure burden.
Fast forward to 2010. Data center consolidation has become an overwhelming theme, underscored by US CIO Vivek Kundra’s mandate that the federal government, as the world’s largest IT user, eliminate most of its more than 1600 government-owned and -operated data centers (consolidating into about a dozen), and further promote efficiency by adopting cloud computing.
The Gold Standard of Data Center Operators Hits a Speed Bump
Equinix (EQIX) has plenty of reasons and explanations for its expected failure to meet third-quarter revenue targets: higher than expected customer churn, reduced pricing to acquire new business, additional accounting for the Switch and Data acquisition, etc., etc., etc…
The bottom line is that the data center business is changing. Single-cabinet customers are looking at hosted services as an economical and operational alternative to maintaining their own infrastructure. Face it: if you are paying for a single cabinet to house your 4 or 5 servers in a data center today, you will probably have a much better overall experience if you migrate that minimal web-facing or customer-facing equipment into a globally distributed cloud.
Likewise, cloud service providers are supporting the same level of Internet peering as most content delivery networks (CDNs) and Internet Service Providers (ISPs), allowing the cloud user to shed the additional burden of operating expensive switching equipment. The user can still decide which peering, ISP, or network provider they want on the external side of the cloud; the physical interconnections are simply no longer necessary within that expensive cabinet.
Traditional data centers, Equinix included, are beginning to feel the move to shared cloud services through higher churn rates and lower sales rates for those individual cabinets or small cages.
The large enterprise colocation users and CDNs continue to grow larger, adding to their ability to renegotiate contracts with the data centers. Space, cross connects, power, and service level agreements all favor the large-footprint and high-power users, and the result is that data centers are further becoming a highly skilled, sophisticated commodity.
The Next Generation Data Center
There are several major factors influencing data center planners today: the impact of cloud computing, the emergence of containerized data centers, the need for far greater energy efficiency (often measured by PUE, Power Usage Effectiveness), and the industry drive toward greater data center consolidation.
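For readers unfamiliar with the metric, PUE is simply total facility power divided by the power actually delivered to IT equipment, so a perfectly efficient facility would score 1.0. A minimal sketch, using hypothetical power figures chosen purely for illustration:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by power
    delivered to IT equipment. 1.0 is the theoretical ideal; every watt
    spent on cooling, lighting, and power conversion pushes it higher."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical example: a facility drawing 1,200 kW total while
# delivering 750 kW to the IT gear itself.
print(round(pue(1200, 750), 2))  # 1.6
```

The closer the ratio gets to 1.0, the less overhead the facility spends on everything that isn’t computing.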
Hunter Newby, CEO of Allied Fiber, strongly believes: “Just as in the last decade we saw the assembly of disparate networks into newly formed common, physical layer interconnection facilities in major markets, we are now seeing a real coordinated global effort to create new and assemble the existing disparate infrastructure elements of dark fiber, wireless towers and data centers. This is the next logical step and the first in the right direction for the next decade and beyond.”
We are also seeing data center containers popping up along the long fiber routes, adjacent to traditional breaking points such as in-line amplifiers (ILAs), fiber optic terminals (locations where carriers physically interconnect their networks either for end-user provisioning, access to metro fiber networks, or redundancy), and wireless towers.
So does this mean the data center of the future is not necessarily confined to large 500 megawatt data center farms, and is potentially something that becomes an inherent part of the transmission network? The computer is the network, the network is the computer, and all other variations in between?
For archival and backup purposes, or caching purposes, can data exist in a widely distributed environment?
Of course, latency within the storage and processing infrastructure will still be bound by physics for the near term. Yet for end-user applications such as desktop virtualization, there really isn’t any particular reason that we MUST have that level of proximity. There are probably ways we can “spoof” the systems into thinking they are located together, and there are a host of other reasons why we do not have to limit ourselves to a handful of “Uber Centers…”
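That physics constraint is easy to put numbers on: light in fiber travels at roughly two-thirds the speed of light in a vacuum, a common rule of thumb of about 200 km per millisecond. A back-of-the-envelope sketch (propagation delay only; the distance is a hypothetical example):

```python
# Rule of thumb: light in optical fiber covers roughly 200 km per
# millisecond (about 2/3 of c). Real paths add serialization,
# queuing, and routing-hop delays on top of this floor.
FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay, in milliseconds, for a given
    one-way fiber distance. Propagation only; a lower bound."""
    return 2 * distance_km / FIBER_KM_PER_MS

# Hypothetical example: a desktop-virtualization user whose data
# center sits 1,000 km away.
print(round_trip_ms(1000))  # 10.0 ms round trip, before any other delay
```

Ten milliseconds of unavoidable round-trip delay over 1,000 km is well under the threshold most interactive applications notice, which is part of why proximity matters less than it once did.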
A Vision for Future Data Centers
What if broadband and compute/storage capacity become truly insulated from the user? What if Carr’s ideas behind The Big Switch really are the future of computing as we know it, our interface to the “compute brain” is limited to dumb devices, and we no longer have to concern ourselves with anything other than writing software against a well-publicized set of standards?
What if the next generation of Equinix is a partner to Verizon or AT&T, and Equinix builds a national compute and storage utility distributed along the fiber routes that is married to the communications infrastructure transmission network?
What if our monthly bill for entertainment, networking, platform, software, and communications is simply the record of how much utility we used during the month, or our subscription fee for the month?
What if wireless access is transparent, and globally available to all mobile and stationary terminals without reconfiguration and a lot of pain?
No more “remote hands” bills, midnight trips to the data center to replace a blown server or disk, dealing with unfriendly or unknowledgeable “support” staff, or questions of who trashed the network due to a runaway virus or malware commando…
Kind of an interesting idea.
Probably going to happen one of these days.
Now if we can extend that utility to all airlines so I can have 100% wired access, 100% of the time.