Gartner Data Center Conference Yields Few Surprises

Gartner’s 2012 Data Center Conference in Las Vegas is notable for not yielding any major surprises.  While attendance appears strong (official numbers are not available, but it is clearly a well-attended conference), most of the sessions simply reaffirm what everybody already knows, reinforcing the reality that data center consolidation, cloud computing, big data, and the move to an interoperable framework will be part of everybody’s life within a few years.

Gartner analyst Ray Paquet started the morning by drawing a line at the real value of server hardware in cloud computing.  Paquet stressed that cloud adopters should avoid integrated hardware solutions based on blade servers, which carry a high margin, and instead focus their CAPEX on cheaper “skinless” servers.  Paquet emphasized that integrated solutions are a “waste of money.”

Cameron Haight, another Gartner analyst, fired a volley at the process and framework world with a comparison of the value DevOps brings versus ITIL.  Haight described ITIL as a cumbersome burden to organizational agility, while DevOps is a culture-changer that allows small groups to respond quickly to challenges.  He emphasized the frequently stressful relationship between development and operations organizations: operations demands stability and quality, while development needs freedom to move projects forward, sometimes without the comfort of baking code to the standards preferred by operations – and required by frameworks such as ITIL.

Haight’s most direct slide described DevOps as being “ITIL minus CRAP.”  Of course, most of his supporting slides for moving to DevOps looked eerily like an ITIL process….

Other sessions attended (by the author) included “Shaping Private Clouds,” a WIPRO product demonstration, and a data center introduction by Raging Wire.  All were valuable introductions for those considering a major change in their internal IT deployments, but nothing cutting edge or radical.

The Raging Wire data center discussion did raise some questions on the overall vulnerability of large box data centers.  While it is certainly possible to build a data center up to any standard needed to fulfill a specific need, the large data center clusters in locations such as Northern Virginia are beginning to appear very vulnerable to natural, human, or equipment-failure disruptions.  In addition to fulfilling data center tier classification models as presented by the Uptime Institute, it is clear we are producing critical national infrastructure which, if disrupted, could cause significant damage to the US economy or even social order.

Eventually, much like the communications infrastructure in the US, data centers will need to come under the observation or review of a national agency such as Homeland Security.  While nobody wants a government officer in the data center, protection of national infrastructure is a consideration we probably will not be able to avoid for long.

Raging Wire also noted that some colocation customers, particularly social media companies, are drawing up to 8kW per cabinet, a scary figure if true, especially across extended deployments.  This could result in serious operational problems if cooling systems were disrupted, as the heat generated in those cabinets would quickly become extreme.  It would also be interesting if Raging Wire and other colocation companies considered developing a real-time CFD monitor for their data center floors, allowing better monitoring and predictability than simple zone-monitoring solutions.
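As a rough sanity check on why an 8kW cabinet is alarming: essentially every watt delivered to IT gear becomes heat the cooling plant must remove. The back-of-the-envelope conversion below is the author's illustration, not a Raging Wire figure, using the standard factors of 3.412 BTU/hr per watt and 12,000 BTU/hr per ton of refrigeration:

```python
# Rough heat-load estimate for a high-density cabinet. Assumes all
# electrical power drawn by IT equipment is converted to heat that the
# cooling system must remove.

BTU_HR_PER_WATT = 3.412   # standard electrical-to-thermal conversion
BTU_HR_PER_TON = 12_000   # one ton of refrigeration

def cabinet_heat_load(kw_per_cabinet: float) -> tuple[float, float]:
    """Return (BTU/hr, tons of cooling) for a cabinet drawing the given kW."""
    btu_hr = kw_per_cabinet * 1000 * BTU_HR_PER_WATT
    tons = btu_hr / BTU_HR_PER_TON
    return btu_hr, tons

btu, tons = cabinet_heat_load(8.0)  # the 8kW cabinets noted above
print(f"{btu:,.0f} BTU/hr, about {tons:.1f} tons of cooling per cabinet")
```

At roughly 2.3 tons of cooling per cabinet, a single row of ten such cabinets demands the cooling capacity of a small office building, which is why a cooling disruption escalates so quickly.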

The best presentation of the day came at the end, “Big Data is Coming to Your Data Center.”  Gartner’s Sheila Childs brought color and enthusiasm to a topic many consider, well, boring.  Childs was able to bring the value, power, and future of big data into a human-consumable format that kept the audience in their seats until the session ended at 6 p.m.

Childs hit on concepts such as “dark data” within organizations, the value of big data in decision support systems (DSS), and the need for developing and recruiting skilled staff who can actually write or build the systems needed to fully exploit the value of big data.  We cannot argue that point, and can only hope our education system is able to focus on producing graduates with the basic skills needed to fulfill that requirement.

CloudGov 2012 Highlights Government Cloud Initiatives

Federal, state, and local government agencies gathered in Washington D.C. on 16 February to participate in CloudGov 2012, held at the Westin Washington D.C.  With keynotes by David L. McClure, US General Services Administration, and Dawn Leaf, NIST, vendors and government agencies were brought up to date on federal cloud policies and initiatives.

Of special note were updates on the FedRAMP program (a government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services) and NIST’s progress on standards.  “The FedRAMP process chart looks complicated,” noted McClure, “however we are trying to provide the support needed to accelerate the (FedRAMP vendor) approval process.”

McClure also provided a roadmap for FedRAMP implementation, with FY13/Q2 targeted for full operation and FY14 planned for sustaining operations.

In a panel focusing on government case studies, David Terry from the Department of Education commented that “mobile phones are rapidly becoming the access point (to applications and data) for young people.”  Applications (SaaS) should be written to accommodate mobile devices, and “auto-adjust to user access devices.”

Tim Matson from DISA highlighted the US Department of Defense’s Forge.Mil initiative, which provides an open collaboration community for the military and the development community to work together in rapidly developing new applications to better support DoD activities.  While Forge.Mil has tighter controls than standard GSA (US General Services Administration) requirements, Matson emphasized “DISA wants to force the concept of change into the behavior of vendors.” Matson continued, explaining that Forge.Mil will reinforce “a pipeline to support continuous delivery” of new applications.

While technology and process change topics provided the majority of discussion points, mostly enthusiastic in tone, David Mihalchik from Google advised “we still do not know the long term impact of global collaboration.  The culture is changing, forced on by the idea of global collaboration.”

Other areas of discussion among panel members throughout the day included the need for establishing and defining service level agreements (SLAs) for cloud services.  Daniel Burton from Salesforce.com explained their SLAs are broken into two categories: SLAs based on subscription services, and those based on specific negotiations with government customers.  Other vendors took a stab at explaining their SLAs without giving specific examples, leaving the audience without a solid answer.

NIST Takes the Leadership Role

The highlight of the day was provided by Dawn Leaf, Senior Executive for Cloud Computing with NIST.  Leaf provided very logical guidance for all cloud computing stakeholders, including vendors and users.

“US industry requires an international standard to ensure (global) competitiveness,” explained Leaf.  In the past, US vendors and service providers have developed standards which were not compatible with European and other standards, notably in wireless telephony, and one of NIST’s objectives is to participate in developing a global standard for cloud computing to prevent a repeat of that fragmentation.

Cloud infrastructure and SaaS portability is also a high-interest item for NIST.  Leaf advises that “we can force vendors into demonstrating their portability.  There are a lot of new entries in the business, and we need to force the vendors into proving their portability and interoperability.”

Leaf also reinforced the idea that standards are developed in the private sector.  NIST provides guidance and an architectural framework for vendors and the private sector to use as reference when developing those specific technical standards.  However, Leaf also had one caution for private industry: “industry should try to map their products to NIST references, as the government is not in a position to wait” for extended debates on the development of specific items, when the need for cloud computing development and implementation is immediate.

Further information on the conference, with agendas and participants, is available at

5 Data Center Technology Predictions for 2012

2011 was a great year for technology innovation.  The science of data center design and operations continued to improve, the move away from mixed-use buildings used as data centers continued, the watts/sqft metric took a back seat to overall kilowatts available to a facility or customer, and the idea of compute capacity and broadband as a utility began to take its place as a basic right of citizens.

However, there are 5 areas where we will see additional significant advances in 2012.

1.  Data Center Consolidation.  The US Government admits it is using only 27% of its overall available compute power.  With 2,094 data centers supporting the federal government (from the CIO’s 25 Point Plan to Reform Federal IT Management), the government is required to close at least 800 of those data centers by 2015.

The lesson is not lost on state and local governments, private industry, or even Internet content providers.  The economics of operating a data center or server closet, whether in costs of real estate, power, or hardware, in addition to service and licensing agreements, are compelling enough to make even the most fervent server-hugger reconsider their religion.

2.  Cloud Computing.  Who doesn’t believe cloud computing will eventually replace the need for server closets, cabinets, or even small cages in data centers?  The move to cloud computing is as certain as the move to email was in the 1980s.

Some IT managers and data owners hate the idea of cloud computing, enterprise service buses, and consolidated data.  It is not so much an issue of losing control; in many cases it is because cloud brings transparency to their operation.  If you are the owner of data in a developing country, and suddenly everything you do can be audited by a central authority – well, it might make you uncomfortable…

A lesson learned while attending a fast-pitch contest in late 2009 in Irvine, CA: an enterprising entrepreneur gave his “pitch” to a panel of investment bankers and venture capital representatives, stating he was looking for a $5 million investment in his startup company.

A panelist asked what the money was for, and the entrepreneur stated “.. and $2 million to build out a data center…”  The panelist responded that 90% of new companies fail within 2 years, and asked why he would want to be stuck with the liability of a data center and hardware if the company failed.  The panelist further stated, “don’t waste my money on a data center – do the smart thing, use the Amazon cloud.”

3.  Virtual Desktops and Hosted Office Automation.  How many times have we lost data and files due to a failed hard drive, stolen laptop, or virus disrupting our computer?  What is the cost or burden of keeping licenses updated, versions updated, and security patches current in an organization with potentially hundreds of users?  What is the lead time when a user needs a new application loaded on a computer?

From applications as simple as Google Docs to Microsoft 365 and other desktop-replacement application suites, users will become free from the burden of carrying a heavy laptop computer everywhere they travel.  Imagine being able to connect your 4G/LTE phone’s HDMI port to a hotel widescreen television monitor and access all the applications normally used at a desktop.  You can give a presentation from your phone, update company documents, or perform nearly any other IT function, with the only limitation being the requirement for a broadband Internet connection (see #5 below).

Your phone can already connect to Google Docs and Microsoft Live Office, and the flexibility of access will only improve as iPads and other mobile devices mature.

The other obvious benefit is that files will be maintained on servers, where they are much more likely to be backed up and included in a disaster recovery plan.

4.  The Science of Data Centers.  It has only been a few years since small hosting companies were satisfied to go into a data center carved out of a mixed-use building, happy to have access to electricity, cooling, and a menu of available Internet network providers.  Most rooms were designed to accommodate 2~3kW per cabinet, and users installed servers, switches, NAS boxes, and routers without regard to alignment or power usage.

That has changed.  No business or organization can survive without a 24x7x365 presence on the Internet, and most enterprises, small and large, are either consolidating their IT into professionally managed data centers or have already washed their hands of servers and other IT infrastructure.

The Uptime Institute, BICSI, TIA, and government agencies have begun publishing guidelines on data center construction, providing best practices, quality standards, design standards, and even standards for evaluation.  Power efficiency metrics such as PUE and DCiE provide additional guidance on power management, data center management, and design.
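For readers new to these metrics: PUE is the ratio of total facility power to the power delivered to IT equipment, and DCiE is its reciprocal expressed as a percentage. A minimal illustration (the 1,600/1,000 kW figures below are invented for the example, not drawn from any facility):

```python
# PUE (Power Usage Effectiveness) and DCiE (Data Center infrastructure
# Efficiency) as defined by The Green Grid. Example numbers only.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Total facility power divided by IT equipment power; 1.0 is ideal."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Reciprocal of PUE, expressed as a percentage; higher is better."""
    return it_equipment_kw / total_facility_kw * 100

# A facility drawing 1,600 kW to deliver 1,000 kW to IT gear:
print(f"PUE  = {pue(1600, 1000):.2f}")    # 1.60
print(f"DCiE = {dcie(1600, 1000):.1f}%")  # 62.5%
```

The remaining 600 kW in this example is overhead: cooling, power conversion losses, lighting, and other facility loads.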

The days of small business technicians running into a data center at 2 a.m. to install new servers, repair broken servers, and pile their empty boxes or garbage in their cabinet or cage on the way out are gone.  The new data center religion is discipline, standards, and security.

Electricity is as valuable as platinum, and cooling and heat are managed more closely than inmates at San Quentin.  With nearly every standards organization now offering certification in cabling, data center design, and data center management, we can soon expect universities to offer an MS or Ph.D. in data center sciences.

5.  The 4th Utility Gains Traction.  Orwell’s “1984” painted a picture of pervasive government surveillance, and incessant public mind control (Wikipedia).  Many people believe the Internet is the source of all evil, including identity theft, pornography, crime, over-socialization of cultures and thoughts, and a huge intellectual time sink that sucks us into the need to be wired or connected 24 hours a day.

Yes, that is pretty much true, and if we do not weigh the thousand good things about the Internet against each negative aspect, it might seem a pretty scary place to which all future generations will be exposed and indoctrinated.  The alternative is to live in an intellectual Brazilian or Papuan rain forest, one step out of the evolutionary stone age.

The Internet is not going away, unless some global repressive government, fundamentalist religion, or dictator manages to dismantle civilization as we know it.

The 4th utility identifies broadband access to the ‘net as a basic right of all citizens, with the same status as roads, water, and electricity.  All governments with a desire to have their nation survive and thrive in the next millennium will find a way to cooperate with network infrastructure providers to build out their national information infrastructure (haven’t heard that term since Al Gore, eh?).

Without a robust 4th utility, our children and their children will produce a global generation of intellectual migrant workers, intellectual refugees from a failed national information sciences vision and policy.

2012 should be a great year.  All the above predictions are positive and, if proved true, will leave the United States and other countries with stronger capacities to improve their national quality of life, bringing us all another step closer together.

Happy New Year!

Evaluating Public Cloud Computing Performance with CloudHarmony

With dozens of public cloud service providers on the market, offering a wide variety of services, standards, SLAs, and options, how does an IT manager make an informed decision on which provider to use?  Is it time in business? Location? Cost? Performance?

Pacific-Tier Communications met up with Jason Read, owner of CloudHarmony, a company specializing in benchmarking the cloud, in Studio City, California, on 25 October.  Read understands how confusing and difficult it is to evaluate different service providers without an industry-standard benchmark.  In fact, Read started CloudHarmony based on his own frustrations as a consultant helping a client choose a public cloud service provider, while attempting to sort through vague cloud resource and service terms used by industry vendors.

“Cloud is so different. Vendors describe resources using vague terminology like 1 virtual CPU, 50 GB storage. I think cloud makes it much easier for providers to mislead. Not all virtual CPUs and 50 GB storage volumes are equal, not by a long shot, but providers often talk and compare as if they are. It was this frustration that led me to create CloudHarmony” explained Read.

So, Read went to work creating a platform, for not only his client but also other consultants and IT managers, that would provide a single point for testing public cloud services not only within the US but around the world.  Input to the testing platform came from aggregating more than 100 testing benchmarks and methodologies available to the public.  However, CloudHarmony standardized on CentOS/RHEL Linux, an operating system which all cloud vendors support, “to provide as close to an apples-to-apples comparison as possible,” said Read.

Customizing a CloudHarmony Benchmark Test

CloudHarmony Configuration

Setting up a test is simple.  You go to the CloudHarmony Benchmarks page, select the benchmarks you would like to run, the service providers you would like to test, configurations of virtual options within those service providers, geographic location, and the format of your report.

Figure 1.  Benchmark Configuration shows a sample report setup.

“CloudHarmony is a starting point for narrowing the search for a public cloud provider,” advised Read.  “We provide data that can facilitate and narrow the selection process. We don’t have all of the data necessary to make a decision related to vendor selection, but I think it is a really good starting point.”

Read continued, “for example, if a company is considering cloud for a very CPU intensive application, using the CPU performance metrics we provide, they’d quickly be able to eliminate vendors that utilize homogenous infrastructure with very little CPU scaling capability from small to larger sized instances.”

Cloud vendors listed in the benchmark directory are surprisingly open to CloudHarmony testing.  “We don’t require or accept payment from vendors to be listed on the site and included in the performance analysis,” mentioned Read.  “We do, however, ask that vendors provide resources to allow us to conduct periodic compute benchmarking, continual uptime monitoring, and network testing.”

When asked if cloud service providers contest or object to CloudHarmony’s methodology or reports, Read replied “not frequently. We try to be open and fair about the performance analysis. We don’t recommend one vendor over another. I’d like CloudHarmony to simply be a source of reliable, objective data. The CloudHarmony performance analysis is just a piece of the puzzle, users should also consider other factors such as pricing, support, scalability, etc.”

CloudHarmony Benchmark Report

During an independent trial of CloudHarmony’s testing tool, Pacific-Tier Communications selected the following parameters to complete a sample CPU benchmark:

  • CPU Benchmark (Single Threaded CPU)
  • GMPbench math library
  • Cloud Vendor – AirVM (MO/USA)
  • Cloud Vendor – Amazon EC2 (CA/USA)
  • Cloud Vendor – Bit Refinery Cloud Hosting (CO/USA)
  • 1/2/4 CPUs
  • Small/Medium/Large configs
  • Bar Chart and Sortable Table report

The result, shown above in Figure 2, includes performance measured against each of the above parameters.  Individual tests for each parameter are available, allowing a deeper look into the resources used and the test results based on those resources.

In addition, as shown in Figure 3, CloudHarmony provides a view of uptime statistics for dozens of cloud service providers over a period of one year.  Uptime statistics showed a range (at the time of this article) between 98.678% and 100% availability, with 100% current uptime (27 October).

Cloud Service Provider Status
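Availability percentages like those in Figure 3 translate directly into hours of annual downtime, which is often the more intuitive number. The quick conversion below is generic arithmetic, not a CloudHarmony calculation:

```python
# Convert an availability percentage into implied annual downtime.

HOURS_PER_YEAR = 365 * 24  # 8,760, ignoring leap years

def downtime_hours_per_year(availability_pct: float) -> float:
    """Hours per year a service is down at the given availability level."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

# The range observed in the article's sample:
for pct in (100.0, 99.99, 98.678):
    print(f"{pct}% availability -> {downtime_hours_per_year(pct):.1f} hours/year down")
```

The spread is striking: 99.99% availability allows under an hour of downtime a year, while the 98.678% figure at the bottom of the range implies nearly five full days.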

Who Uses CloudHarmony Benchmark Testing?

While the average user today may be in the cloud computing industry, likely vendors eager to see how their product compares against competitors, Read targets CloudHarmony’s product to “persons responsible for making decisions related to cloud adoption.”  Although he admits that today most users of the site lean towards the technical side of the cloud service provider industry.

Running test reports on CloudHarmony is based on a system of purchased credits.  Read explained, “we have a system in place now where the data we provide is accessible via the website or web services – both of which rely on web service credits to provide the data. Currently, the system is set up to allow 5 free requests daily. For additional requests, we sell web service credits where we provide a token that authorizes you to access the data in addition to the 5 free daily requests.”

The Bottom Line

“Cloud is in many ways a black box” noted Read.  “Vendors describe the resources they sell using sometimes similar and sometimes very different terminology. It is very difficult to compare providers and to determine performance expectations. Virtualization and multi-tenancy further complicates this issue by introducing performance variability. I decided to build CloudHarmony to provide greater transparency to the cloud.”

And, for both vendors and potential cloud service customers, to provide an objective, honest, transparent analysis of commercially available public cloud services.

Check out CloudHarmony and their directory of services at


University of Washington Launches Certificate in Cloud Computing

In an online “blogger” press conference on 5 August, Erik Bansleben, Ph.D., Program Development Director, Academic Programs at the University of Washington, outlined a new certificate program in Cloud Computing offered by the university.  The program is directed towards “college level and career professionals,” said Bansleben, adding “all courses are practical in approach.”

Using a combination of classroom and online instruction, the certificate program allows the flexibility to accommodate remote students in a virtual extension of the residence program.  While the program does not offer formal academic credit, the certificates are “well respected locally by employers, and really tend to help students a fair amount in getting internships, getting new jobs, or advancing in their current jobs.”

The Certificate in Cloud Computing is broken into three courses:

  • Introduction to Cloud Computing
  • Cloud Computing in Action
  • Scalable & Data-Intensive Computing in the Cloud

The courses are taught by instructors from both the business community and the University’s Department of Computer Science & Engineering.  Topics within each course are designed not only to provide an overview of the concepts and business value of cloud computing, but also to include project work and assignments.

To bring more relevance to students, Bansleben noted “part of the courses will be based on student backgrounds and student interests.”  Dr. Bill Howe, instructor for the “Scalable & Data-Intensive Computing in the Cloud” course, added “nobody is starting a company without being in the clouds.”  The program covers topical areas such as:

  • Cloud computing models: software as a service (SaaS), platform as a service (PaaS), infrastructure as a service (IaaS) and database as a service
  • Market overview of cloud providers
  • Strategic technology choices and development tools for basic cloud application building
  • Web-scale analytics and frameworks for processing large data sets
  • Database query optimization
  • Fault tolerance and disaster recovery

Students will walk away with a solid background in cloud computing and how it will impact future planning for IT infrastructure.  In addition, each course will invite guest speakers from cloud computing vendors and industry leaders to present actual case studies to further apply context to course theory.  Bansleben reinforced the plan to provide students with specific “use cases for or against using cloud services vs. using your own hosted services.”

Not designed as a simple high-level overview of cloud computing concepts, the program requires students to have a background in IT networks and protocols, as well as familiarity with file manipulation in system environments such as Linux.  Bansleben stated that “some level of programming experience is required” as a prerequisite for the certificate program.

The Certificate in Cloud Computing program starts on 10 October and will cost students around $2,577 for the entire program.  The program is limited to 40 students, including both resident and online participants.  For more information on University of Washington certificate programs or the Certificate in Cloud Computing, contact:

Erik Bansleben, Program Development Director

Charting the Future of Small Data Centers

Every week a new data center hits the news with claims of greater than 100,000 square feet at >300 watts/square foot, and levels of security rivaling that of the NSA.  Marketing people sling terms such as hot and cold aisle containment, PUE (Power Usage Effectiveness), modular data centers, containers, computational fluid dynamics, and outsourcing with such smoothness and velocity that even used car salesmen regard them in complete awe.

Don’t get me wrong: outsourcing your enterprise data center or Internet site into a commercial data center (colocation), or a cloud computing-supported virtual data center, is not a bad thing.  As interconnections between cities are reinforced, and sufficient levels of broadband access continue to find their way to both businesses and residences throughout the country – not to mention all the economic drivers such as OPEX, CAPEX, and flexibility in cloud environments – the need to maintain an internal data center or server closet makes little sense.

Small Data Centers Feel Pain

In the late 1990s, data center colocation started to develop roots.  The Internet was maturing, and eCommerce, entertainment, business-to-business, academic, and government IT operations found proximity to networks a necessity; the colocation industry formed to meet the opportunity stimulated by Internet adoption.

Many of these data centers were built in “mixed use” buildings, or existing properties in city centers close to existing telecommunication infrastructure.  In cities such as Los Angeles, commercial property absorption in city centers was at a low, providing affordable and readily available space for the emerging colocation industry.

The power densities in those early days were minimal, averaging somewhere around 70 watts/square foot.  Thus, equipment installed in colocation space carved out of office buildings was manageable by over-subscribing air conditioning within the space.  The main limitation in the early colocation days was floor loading within an office space, as batteries and equipment cabinets within colocation areas would stretch building structures to their limits.
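To see how a watts-per-square-foot design density maps onto a per-cabinet power budget, assume each cabinet occupies roughly 25 to 30 square feet of gross floor space (footprint plus its share of aisles and support space; this allocation is an illustrative assumption, not a figure from the article):

```python
# Rough translation of a floor-level design density (W/sqft) into a
# per-cabinet power budget, given an assumed gross floor allocation
# per cabinet (footprint plus shared aisle and support space).

def kw_per_cabinet(watts_per_sqft: float, sqft_per_cabinet: float) -> float:
    """Power budget in kW for one cabinet at the given design density."""
    return watts_per_sqft * sqft_per_cabinet / 1000

# Early mixed-use colocation space at ~70 W/sqft:
for sqft in (25, 30):
    print(f"{sqft} sqft/cabinet -> {kw_per_cabinet(70, sqft):.2f} kW per cabinet")
```

At ~70 W/sqft the budget works out to roughly 1.75 to 2.1 kW per cabinet, which is why this class of facility simply cannot absorb modern high-density loads.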

As the data center industry and Internet content hosting continued to grow, the amount of equipment being placed in mixed-use building colocation centers finally reached a breaking point around 2005.  The buildings simply could not support the requirements for additional power, cooling, and backup generators needed by the rapidly developing data center market.

Around that time, a new generation of custom-built data center properties began construction, with very few limitations on weight, power consumption, or cooling requirements, and with room for creativity in custom designs to achieve the best PUE factors and move towards “green” designs.

The “boom town” inner-city data centers then began experiencing difficulty attracting new customers and retaining their existing customer base.  Many of the “dot com” customers ran out of steam during this period, going bankrupt or abandoning their cabinets and cages, while new data center customers fit into a few categories:

  • High end hosting and content delivery networks (CDNs), including cloud computing
  • Enterprise outsourcing
  • Telecom companies, Internet Service Providers, Network Service Providers

With few exceptions, these customers demanded much higher power densities, physical security, redundancy, reliability, and access to large numbers of communication providers.  Small data centers operating out of office building space found it very difficult to meet the demands of high-end users, and thus the colocation community began a migration to larger data centers.  In addition, the loss of cash flow from “dot com” churn forced many data centers to shut down, leaving much of the small data center industry in ruins.

Data Center Consolidation and Cloud Computing Compounds the Problem

New companies are finding it very difficult to justify spending money on physical servers and basic software licenses.  If you are able to spool up servers and storage on demand through a cloud service provider – why waste the time and money trying to build your own infrastructure – even infrastructure outsourced or colocated in a small data center?  It is simply a bad investment for most companies to build data centers – particularly if the cloud service provider has inherent disaster recovery and backup utility.

Even existing small eCommerce sites hitting refresh cycles for their hardware and software find it difficult to continue one- or two-cabinet installations within small data centers when they can accomplish the same thing for a lower cost, and with higher performance, by refreshing into a cloud service provider.

Even the US Government, the world’s largest IT user, has turned its back on small data center installations throughout federal agencies.

The goals of the Federal Data Center Consolidation Initiative are to assist agencies in identifying their existing data center assets and to formulate consolidation plans that include a technical roadmap and consolidation targets. The Initiative aims to address the growth of data centers and assist agencies in leveraging best practices from the public and private sector to:

  • Promote the use of Green IT by reducing the overall energy and real estate footprint of government data centers;
  • Reduce the cost of data center hardware, software and operations;
  • Increase the overall IT security posture of the government; and,
  • Shift IT investments to more efficient computing platforms and technologies.

To harness the benefits of cloud computing, we have instituted a Cloud First policy. This policy is intended to accelerate the pace at which the government will realize the value of cloud computing by requiring agencies to evaluate safe, secure cloud computing options before making any new investments. (Federal Cloud Computing Strategy)
Add similar initiatives in the UK, Australia, Japan, Canada, and other countries to eliminate inefficient data center programs, along with the level of attention these initiatives are receiving in the private sector, and the message is clear: inefficient data center installations will become the exception.

Hope for Small Data Centers?

Absolutely!  There will always be a compelling argument for proximity of data and applications to end users.  Whether this be enterprise data, entertainment, or disaster recovery and business continuity, there is a need for well-built and well-managed data centers outside of the “Tier 1” data center industry.

However, this also means data center operators will need to upgrade their existing facilities to meet the quality and availability requirements of a wired, global, network-enabled community.  Internet and applications/data access is no longer a value-added service; it is critical infrastructure.

Even the most “shoestring” budget facility will need to meet basic standards published by BICSI (e.g., BICSI 2010-002), the Telecom Industry Association (TIA-942), or even private organizations such as the Uptime Institute.

With the integration of network-enabled everything into business and social activities, investors and insurance companies are demanding audits of data centers, using audit standards such as SAS 70 to provide confidence that their investments are protected by sound operational processes and construction.

A data center may not be able to provide 100,000 square feet of 300-watt-per-square-foot space, but if it can provide its local market with adequate space and quality to meet customer needs, there will be a market.

This is particularly true for customers who require flexibility in service agreements, custom support, a large selection of telecommunications companies available within the site, and local business continuity options.  Hosting a local Internet exchange point or carrier Ethernet exchange within the facility would also make the space much more attractive.

The Road Ahead

Large data centers and cloud service providers are continuing to expand, developing their options and services to meet the growing data center consolidation and virtualization trend within both the enterprise and the global Internet-facing community.  This makes sense, and will provide a very valuable service for a large percentage of the industry.

Small data centers in Tier 1 cities (in the US that would include Los Angeles, the Northern California Bay Area, New York, and Northern Virginia/DC/MD) will likely find it difficult to compete with extremely large data centers – unless they can provide a very compelling service such as hosting a large carrier hotel (network interconnection point), Internet Exchange Point, or Cloud Exchange.

However, there will always be a need for local content delivery, application (and storage) hosting, disaster recovery, and network interconnection.  Small data centers will need to bring their facilities up to international standards to remain competitive, as their competition is not local, but large data centers in Tier 1 cities.

5 Cloud Computing Predictions for 2011

  1. ESBaaS Will Emerge in Enterprise Clouds.  Enterprise service bus as a service will begin to emerge within enterprise clouds to allow common messaging within applications among different organizational units.  This will further support standardization within an enterprise, as well as reduce lead times for applications development.
  2. Enterprise Cloud Computing will Accelerate Data Center Consolidation.  As enterprises and governments continue to deal with the cost of operating individual data centers, consolidation will become a much more important topic.  As the consolidation process is planned, further migration to cloud computing and virtualized environments will become very attractive – if not critical – to all organizations.
  3. Desktop Virtualization.  As we become more comfortable with Google Apps, Microsoft Office 365, and other desktop replacement environments, high-powered desktop workstations will be needed only by power users.  In addition to the obvious attraction of better data protection and disaster recovery, the cost of expensive workstations and local application licenses makes little sense.  The first migration will be for those who are primarily connected via an organizational LAN, with road warriors and mobile users following as broadband becomes more ubiquitous.
  4. SME Data Center Outsourcing into Public Clouds.  Small companies  requiring routine data center support, including office automation, servers, finance applications, and web presence, will find it difficult to justify installing their own equipment in a private or public colocation center.  In fact, it is unlikely savvy investors will support start up companies planning to operate their own data center, unless they are in an industry considered a very clear exception to normal IT requirements.
  5. Cloud Computing and Cloud Storage will Look to PODs and Containers.  Microsoft and Google have proven the concept on a large scale; now the rest of the cloud computing and data center industry will take notice and begin to consider compute and storage capacity as a utility.  As a utility, the compute, storage, switching, and communications components will take advantage of the greater efficiencies and design flexibility of moving beyond traditional data center concrete.  This will further support the idea of distributed cloud computing, portability, cloud exchanges, and cloud spot markets in 2012…
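
Prediction 1’s ESB-as-a-service idea can be reduced to a toy topic-based message bus: organizational units publish and subscribe through one shared broker instead of wiring applications directly to each other.  The class and topic names below are invented purely for illustration.

```python
# A toy topic-based message bus sketching the ESBaaS idea: different
# organizational units share one broker for common messaging. Names
# and topics are hypothetical.

from collections import defaultdict

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a callable to receive every message on a topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        """Deliver a message to all handlers subscribed to the topic."""
        for handler in self._subscribers[topic]:
            handler(message)

bus = MessageBus()
received = []
bus.subscribe("orders.created", received.append)   # finance unit listens
bus.subscribe("orders.created", received.append)   # fulfillment unit listens
bus.publish("orders.created", {"order_id": 42})
print(len(received))  # both units see the same event
```

The point of hosting this as a service in an enterprise cloud is exactly what the prediction says: one standard bus, shorter application lead times.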

Cloud Computing Wish List for 2011

2010 was a great year for cloud computing.  The hype phase of cloud computing is closing in on maturity, as the message has finally reached nearly everyone in the Cxx tier.  And for good reason.  The diffusion of IT-everything into nearly every aspect of our lives needs a lot of compute, storage, and network horsepower.

And we are finally getting to the point where cloud computing is no longer explained with exotic diagrams on a whiteboard or PowerPoint presentation, but is actually something we can start knitting together into a useful tool.

The National Institute of Standards and Technology (NIST) in the United States takes cloud computing seriously, and is well on the way to setting standards for cloud computing, at least in the US.  The NIST definitions of cloud computing are already an international reference, and as that taxonomy continues to baseline vendor cloud solutions, it is a good sign we are on the way to product maturity.

Now is the Time to Build Confidence

Unless you are an IT manager in a bleeding-edge technology company, there is rarely any incentive to be in the first-mover quadrant of technology implementation.  The intent of IT managers is to keep the company’s information secure, and provide the utilities needed to meet company objectives.  Putting a company at risk by implementing “cool stuff” is not the best career choice.

However, as cloud computing continues to mature, and the cost of operating an internal data center continues to rise (due to the cost of electricity, real estate, and equipment maintenance), IT managers really have no choice – they have to at least learn the cloud computing technology and operations environment.  If for no other reason than their Cxx team will eventually ask the question of “what does this mean to our company?”

An IT manager will need to prepare an educated response to the Cxx team, and be able to clearly articulate the following:

  • Why cloud computing would bring operational or competitive advantage to the company
  • Why it might not bring advantage to the company
  • The cost of operating in a cloud environment versus a traditional data center environment
  • The relationship between data center consolidation and cloud computing
  • The advantage or disadvantage of data center outsourcing and consolidation
  • The differences between enterprise clouds, public clouds, and hybrid clouds
  • The OPEX/CAPEX comparisons of running individual servers versus virtualization, or virtualization within a cloud environment
  • A graphical comparison of cloud computing models and traditional models, including the cost of capacity
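
One bullet above – the OPEX/CAPEX comparison of individual servers versus cloud capacity – can be made concrete with a quick break-even sketch.  The hourly rates below are assumptions for illustration, not real price quotes:

```python
# Illustrative break-even sketch: at what sustained utilization does an
# owned, virtualized server beat renting the same capacity on demand?
# Both hourly rates are invented for illustration.

def breakeven_utilization(owned_hourly=0.12, cloud_hourly=0.40):
    """Fraction of the year a workload must run for ownership to win.

    Owning costs owned_hourly every hour, busy or idle; the cloud charges
    cloud_hourly only for busy hours.  Costs are equal when
    owned_hourly * 8760 == cloud_hourly * (utilization * 8760),
    i.e. at utilization == owned_hourly / cloud_hourly.
    """
    return owned_hourly / cloud_hourly

u = breakeven_utilization()
print(f"Ownership pays off above {u:.0%} sustained utilization")
```

An IT manager who can walk the Cxx team through this kind of arithmetic, with the organization’s own numbers, has answered half the bullets on the list.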

Wish List Priority 1 – Cloud Computing Interoperability

It is not just about vendor lock-in.  It is not just about building a competitive environment.  It is about having the opportunity to use local, national, and international cloud computing resources when it is in the interest of your organization.

Hybrid clouds are defined by NIST, but in reality are still simply a great idea.  The idea of being able to overflow processing from an enterprise cloud to a public cloud is well-founded, and in fact represents one of the basic visions of cloud computing.  Processing capacity on demand.

But let’s take this one step further.  The cloud exchange.  We’ve discussed this for a couple of years, and now the technology needs to catch up with the concept.

If we can have an Internet exchange, a carrier Ethernet exchange, and a telephone exchange – why can’t we have a Cloud Exchange?  Or a single one-stop shop where cloud compute capacity consumers can access a spot market for on-demand cloud compute resources?

Here is one idea.  Take your average Internet Exchange Point, like Amsterdam (AMS-IX), Frankfurt (DE-CIX), Los Angeles (Any2), or London (LINX), where hundreds of Internet networks, content delivery networks, and enterprise networks come together to interconnect at a single point.  This is the place where the only restriction on interconnection of networks and resources is the capacity of your port(s) connecting you to the exchange point.

Most Internet Exchange Points are colocated with large data centers, or are in very close proximity to large data centers (with a lot of dark fiber connecting the facilities).  The data centers manage most of the large content delivery networks (CDNs) facing the Internet.  Many of those CDNs have irregular capacity requirements based on event-driven, seasonal, or other activities.

The CDN can either build their colocation capacity to meet the maximum forecast requirements of their product, or they could potentially interconnect with a colocated cloud computing company for overflow capacity – at the point of Internet exchange.

The cloud computing companies (with the exception of the “Big 3”), are also – yes, in the same data centers as the CDNs.  Ditto for the enterprise networks choosing to either outsource their operations into a data center – or outsource into a public cloud provider.
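
The cloud-exchange idea above can be sketched as a toy spot market: CDNs post overflow demand, colocated cloud providers post spare capacity, and the exchange matches them cheapest-capacity-first.  The participant names and prices below are invented for illustration.

```python
# Toy cloud-exchange matcher: buyers bid for overflow compute, sellers
# offer spare capacity, and fills go to the cheapest acceptable offers.
# Participants and prices are hypothetical.

def match_spot_market(bids, offers):
    """bids:   list of (buyer, cores_wanted, max_price_per_core_hour)
    offers: list of (seller, cores_free, ask_price_per_core_hour)
    Returns a list of (buyer, seller, cores, price) fills."""
    fills = []
    offers = sorted(offers, key=lambda o: o[2])  # cheapest capacity first
    for buyer, wanted, max_price in bids:
        for i, (seller, free, ask) in enumerate(offers):
            if wanted == 0:
                break
            if free == 0 or ask > max_price:
                continue
            take = min(wanted, free)
            fills.append((buyer, seller, take, ask))
            offers[i] = (seller, free - take, ask)
            wanted -= take
    return fills

fills = match_spot_market(
    bids=[("cdn-a", 120, 0.10)],
    offers=[("cloud-x", 80, 0.06), ("cloud-y", 100, 0.09)],
)
print(fills)  # cdn-a takes 80 cores from cloud-x, then 40 from cloud-y
```

The interesting part is where this matcher runs: at the exchange point, where the dark fiber already connects every participant.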

Wish List – Develop a cloud computing exchange colocated, or part of large Internet Exchange Points.

Wish List Extra Credit – Switch vendors develop high capacity SSDs that fit into switch slots, making storage part of the switch backplane.

Simple and Secure Disaster Recovery Models

Along with the idea of distributed cloud processing, interoperability, and on-demand resources comes the most simple of all cloud visions – disaster recovery.

One of the reasons we all talk cloud computing is the potential for data center consolidation and recovery of CAPEX/OPEX for reallocation into development and revenue-producing activities.

However, with data center consolidation comes the equally important task of developing strong disaster recovery and business continuity models.  Whether it be through producing hot standby images of applications and data, simply backing up data into a remote (secure) location, or both, disaster recovery remains a high priority for 2011.

You might state “disaster recovery has been around since the beginning of computing, with 9-track tape copies and punch cards – what’s new?”

What’s new is the reality that most companies and organizations still have no meaningful disaster recovery plan.  There may be a weekly backup to tape or disk, and there may even be the odd company or organization with a standby capability that limits recovery time and recovery point objectives to a day or two.  But let’s be honest – those are the exceptions.

Having surveyed enterprise and government users over the past two years, we have noticed that very, very few organizations with paper disaster recovery plans actually implement their plans in practice.  This includes many local and state governments within the US (check out some of the reports published by the National Association of State CIOs/NASCIO if you don’t believe this statement!).

Wish List Item 2 – Develop a simple, really simple and cost effective disaster recovery model within the cloud computing industry.  Make it an inherent part of all cloud computing products and services.  Make it so simple no IT manager can ever again come up with an excuse why their recovery point and time objectives are not ZERO.
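
The recovery-point arithmetic behind that wish is simple enough to sketch: given a history of backup snapshots, the worst-case RPO is just the longest gap between them.  The timestamps below are invented for illustration.

```python
# Minimal RPO sketch: the worst-case data loss window is the largest gap
# between consecutive snapshots. All timestamps are hypothetical.

from datetime import datetime, timedelta

def worst_case_rpo(snapshot_times):
    """Return the largest gap between consecutive snapshots."""
    times = sorted(snapshot_times)
    gaps = [b - a for a, b in zip(times, times[1:])]
    return max(gaps)

# A weekly tape run risks a full week of data...
weekly = [datetime(2011, 1, d) for d in (3, 10, 17, 24)]
# ...while continuous replication (here, every 5 minutes) risks minutes.
continuous = [datetime(2011, 1, 3) + timedelta(minutes=m)
              for m in range(0, 60, 5)]

print(worst_case_rpo(weekly))
print(worst_case_rpo(continuous))
```

A cloud provider that snapshots continuously as an inherent part of the product is what drives that gap – and the excuses – toward zero.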

Moving Towards the Virtual Desktop

Makes sense.  If cloud computing brings applications back to the SaaS model, and communications capacity and bandwidth have reduced delays – even on long-distance connections – to the point we humans cannot tell whether we are on a LAN or a WAN, then let’s start dumping high-cost workstations.

Sure, that 1% of the IT world using CAD, graphics design, and other funky stuff will still need the most powerful computer available on the market, but the rest of us can certainly live with hosted email, other unified communications, and office automation applications.  You start your dumb terminal with the 30” screen at 0800, and log off at 1730.

If you really need to check email at night or on the road, your 3G->4G smart phone or netbook connection will provide more than adequate bandwidth to connect to your host email application or files.

This supports disaster recovery objectives, lowers the cost of expensive workstations, and allows organizations to regain control of their intellectual property.

With application portability, at this point it makes no difference whether you are using Google Apps, Microsoft 365, or some other emerging hosted environment.

Wish List Item 3 – IT Managers, please consider dumping the high end desktop workstation, gain control over your intellectual property, recover the cost of IT equipment, and standardize your organizational environment.

More Wish List Items

Yes, there are many more.  But those start edging towards “cool.”  We want to concentrate on those items really needed to continue pushing the global IT community towards virtualization.

The Argument Against Cloud Computing

As a cloud computing evangelist there is nothing quite as frustrating, and challenging, as the outright rejection of anything related to data center consolidation, data center outsourcing, or use of shared, multi-tenant cloud-based resources.  How can anybody in the late stages of 2010 possibly deny a future of VDIs and virtual data centers?

Actually, it is fairly easy to understand.  IT managers are not graded on their ability to adopt the latest “flavor of the day” technology, or adherence to theoretical concepts that look really good in Powerpoint, but in reality are largely untested and still in the development phase.

Just as a company stands a 60% chance of failure if it suffers a disaster without a recovery or continuity plan, moving the corporate cookies too quickly into a “concept” may be considered equally irresponsible by a board of directors, as the cost of failure and loss of data remains extremely high.

The Burden Carried by Thought Leaders and Early Adopters

Very few ideas or visions are successful if kept secret.  Major shifts in technology or business process (including organizational structure) require more than exposure to a few white papers, articles, or segments on the “Tech Hour” of a cable news station.

Even as simple and routine as email is today, during the 1980s it was not fully understood, was mistrusted, and was even mocked by users of “stable” communication systems such as fax, TELEX, and landline telephones.  In 2010, presidents of the world’s most powerful nations are cheerfully texting, emailing, and micro-blogging their way through the highest levels of global diplomacy.

It takes time, experience, tacit knowledge, and the sense that your business, government, or social community is moving forward at a rate that will leave you on the outside if the new technology or service is not adopted and implemented.

The question is, “how long will it take us to get to the point where we must accept outsourcing our information technology services and infrastructure, or face a higher risk of not being part of our professional or personal community?”

Email first popped up in the late 1970s, and never really went mainstream until around the year 2000.  Until then, when executives did use email, it was generally transcribed from written memos and typed in by a secretary.  So far, we have gradually started learning about cloud computing through social media, hosted public mail systems, and some limited SaaS applications.

Even at the point we evangelist types, as a community, can clearly articulate the reality that cloud computing has already planted its seeds in nearly every Internet-enabled computer, smart phone, and smart device, the vision of cloud computing will still be far too abstract for most to understand.

And this reinforces the corporate and organizational mind’s natural desire to hold back until others have developed the knowledge base and best practices needed to bring their community to the point where implementing an IT outsourcing strategy will be to their benefit, and not a step in their undoing.

In fact, we need to train the IT community to be critical, to learn more about cloud computing, and question their role in the future of cloud computing.  How else can we expect the knowledge level to rise to the point IT managers will have confidence in this new service technology?

And You Thought it was About Competitive Advantage?

Yes, the cloud computing bandwagon is overflowing with snappy topics such as:

  • Infrastructure agility
  • Economies of scale
  • Enabling technology
  • Reduced provisioning cycles
  • Relief from capital expense
  • Better disaster recovery
  • Capacity on demand
  • IT as a Service
  • Virtual everything
  • Publics, privates, and hybrids
  • Multi-resource variability
  • Pay as you go

Oh my, we will need a special lexicon just to wade through the new marketing language.  The main goals of cloud computing, in our humble opinion, are:

  • Data center consolidation
  • Disaster recovery
  • IT as a Service

Cloud computing itself will not make us better managers and companies.  Cloud computing will serve as a very powerful tool to let us more efficiently, more quickly, and more effectively meet our organizational goals.  Until we have the confidence that cloud computing will serve that purpose, it is probably a fairly significant risk to jump in based on the great marketing data dazzling us on PowerPoint slides and power presentations.

We will Adopt Cloud Computing, or Something Like It

Now to recover my cloud computing evangelist enthusiasm.  I do deeply believe in the word – the word of cloud computing as a utility, as a component of broadband communications, as all of the bullets listed above.  It will take time, and I warmly accept the burden of responsibility to further codify the realities of cloud computing, the requirements we need to fulfill as an industry to break out of the “first mover phase,” and the need to establish a roadmap for companies to shift their IT operations to the cloud.

Just as with email, it is just one of those things you know is going to happen.  We knew it in the early days of GRID computing, and we know it now.  Let’s focus our discussion on cloud computing to more of a “how” and “when” conversation, rather than a “wow” and “ain’t it cool” conversation.

Now, as I dust off a circa-1980 set of slides discussing the value of messaging, and how it would support one-to-one, one-to-many, and many-to-many forms of interactive and non-interactive communications, it is time for us to provide a similar Introduction to Cloud.

Get the pulpit ready
