Disaster Recovery as a First Step into Cloud Computing

Organizations see the benefits of cloud computing, but many are simply mortified at the prospect of re-engineering their operations to fit existing cloud service technologies or architectures.  So how can we take the first step?

We (at Pacific-Tier Communications) have conducted 103 surveys over the past few months in the US, Canada, Indonesia, and Moldova on the topic of cloud computing.  The surveys targeted IT managers in both commercial companies and government organizations.

The survey results were really no different from most – IT managers generally find cloud computing and virtualization exciting technology and service developments, but they are reluctant to jump into the cloud for a variety of reasons, including:

  • Organization is not ready (including internal politics)
  • No specific budget
  • Applications not prepared for migration to cloud
  • and lots of other reasons

The list of reasons for not moving into the cloud will keep growing until organizations reach the point where they can no longer avoid the topic, probably around the time of a major technology refresh.

Disaster Recovery is Different

The surveys also indicated another consistent trend – most organizations still have no formal disaster recovery plan.  This is particularly common within government agencies, including the state and local governments surveyed in the United States.

IT managers in many government agencies had critical data stored on laptops and desktops, or, in most cases, kept their organization's operating data in a server closet with either no backup at all or an onsite tape backup with no offsite storage.

In addition, the central or controlling government/commercial IT organization either had no specific policy for backing up data or, in the worst case, had no means of backing up data (a central or common storage system) available to individual branch or agency users.

When asked whether they would use cloud storage, or even dedicated storage, if it became available with reasonable technical ease and at affordable cost, the IT managers agreed, most enthusiastically, that they would support developing automated backup, including individual workstation backup, to prevent data loss and reinforce application availability.
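To make that concrete, here is a minimal sketch of the kind of automated offsite backup those managers described, assuming an S3-compatible object store reachable over the network.  The boto3 library, bucket name, and paths are illustrative assumptions, not anything reported in the surveys:

```python
# Minimal sketch of an automated offsite backup to S3-compatible cloud
# storage. Assumptions (not from the surveys): boto3 is installed,
# credentials are configured, and a bucket named "agency-backups" exists.
import os
import socket
import tarfile
from datetime import date

import boto3

def backup_to_cloud(source_dir: str, bucket: str = "agency-backups") -> str:
    """Archive source_dir and upload it as a dated object; return the key."""
    archive = f"/tmp/backup-{date.today().isoformat()}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source_dir, arcname=os.path.basename(source_dir))

    key = f"workstations/{socket.gethostname()}/{os.path.basename(archive)}"
    boto3.client("s3").upload_file(archive, bucket, key)
    os.remove(archive)  # the offsite copy is the point; drop the local archive
    return key

if __name__ == "__main__":
    print("uploaded:", backup_to_cloud("/home/shared/records"))
```

Scheduled nightly with cron, even this much beats a tape drive in a server closet.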

Private or Public – Does it Make a Difference?

While most IT managers are still worshiping at the shrine of IT Infrastructure Control, cracks are appearing in the “Great Walls of IT Infrastructure.”  With dwindling IT budgets and explosive growth in user and organizational demand for IT utility, IT managers are slowly realizing the good old days of control are nearly gone.

And to add further tarnish to their pride, IT managers are also facing the probability that at least some of their infrastructure will find its way into public cloud services, completely out of their domain.

On the other hand, it is becoming more and more difficult to justify building internal infrastructure when the quality, security, and utility of public services often exceed what can be built internally.  Of course there are exceptions to every rule, which in our discussion include requirements for additional security for government-sensitive or classified information.

That information could include military data, citizen identification data, or other similar information that, while securable through encryption and partition management, may be politically impossible to extend beyond the walls of an internal data center (particularly where the data could possibly leave a country's borders).

For most other information, determining whether a public storage service or an internal storage service makes more sense is quickly becoming a simple exercise in financial planning.
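That financial exercise really can be a few lines of arithmetic.  A hedged sketch, with every figure invented for illustration:

```python
# Back-of-envelope comparison: internal SAN versus public cloud storage
# over a refresh cycle. Every figure below is invented; substitute real
# quotes from your vendors and providers.
SAN_CAPEX = 120_000          # hardware purchase, amortized over the cycle
SAN_ANNUAL_OPEX = 18_000     # power, cooling, support contract, admin time
CLOUD_PER_GB_MONTH = 0.15    # provider list price per GB-month
CAPACITY_GB = 20_000
YEARS = 4

internal_total = SAN_CAPEX + SAN_ANNUAL_OPEX * YEARS
cloud_total = CLOUD_PER_GB_MONTH * CAPACITY_GB * 12 * YEARS

print(f"internal over {YEARS} years: ${internal_total:,.0f}")  # $192,000
print(f"cloud over {YEARS} years:    ${cloud_total:,.0f}")     # $144,000
```

Plug in real numbers and the answer often falls out in minutes, one way or the other.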

The Intent is Disaster Recovery and Data Backup

Getting back to the point: with nearly all countries, and central government properties in particular, on or near high-capacity telecom carriers and networks, and with the cost of bandwidth plummeting, the excuses for not using network-based offsite backups of individual and organizational data are becoming rare.

In our surveys and interviews it was clear IT managers fully understood the issue, need, and risk of failure relative to disaster recovery and backup.

Cloud storage, once explained and understood, would help solve the problem.  Pushing disaster recovery (at least at the level of backups) into cloud storage may be an important first step and, assuming it succeeds, the opening move in a longer-term migration to cloud services.

All managers understood the potential benefits of virtual desktops, SaaS applications, and the use of high-performance virtualized infrastructure.  They did not always like it, but they understood that within the next refresh generation of hardware and software technology, cloud computing would have an impact on their organization’s future.

But in the short term, disaster recovery and systems backup into cloud storage is the least traumatic first step ahead.

How about your organization?

A Cloud Computing Wish List for 2010

A cloud spot market allows commercial cloud service providers to announce surplus or idle processing and storage capacity to a cloud exchange. The exchange allows buyers to locate available cloud processing capacity, negotiate prices (within milliseconds), and deliver the commodity to customers on demand.

Cloud processing and storage spot markets can be privately operated, controlled by industry organizations, or potentially government agencies. Spot markets frequently attract speculators, as cloud capacity prices are known to the public immediately as transactions occur.

The 2010 cloud spot market would let commercial cloud service providers support both franchise customers (with dedicated service level agreements) and on-demand customers, the latter automatically moving their applications and storage to whichever providers offer the best pricing and service levels against pre-defined criteria.
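As a toy sketch of how the matching step of such an exchange might work (all classes, names, regions, and prices are invented for illustration, not a description of any real exchange):

```python
# Toy sketch of the matching step a cloud spot exchange might perform.
# All classes, names, regions, and prices are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Offer:                     # surplus capacity announced by a provider
    provider: str
    region: str
    cpu_hours: int
    price_per_cpu_hour: float

@dataclass
class Bid:                       # a customer's pre-defined criteria
    customer: str
    region: str                  # proximity to end users
    cpu_hours: int
    max_price: float             # price ceiling

def match(bid: Bid, offers: list[Offer]) -> Optional[Offer]:
    """Return the cheapest offer satisfying the bid's criteria, if any."""
    eligible = [o for o in offers
                if o.region == bid.region
                and o.cpu_hours >= bid.cpu_hours
                and o.price_per_cpu_hour <= bid.max_price]
    return min(eligible, key=lambda o: o.price_per_cpu_hour, default=None)

offers = [Offer("CloudA", "us-west", 500, 0.09),
          Offer("CloudB", "us-west", 800, 0.07)]
print(match(Bid("agency-1", "us-west", 400, 0.08), offers))  # CloudB wins
```

A real exchange would add clearing, settlement, and SLA enforcement on top, but the core is exactly this: criteria in, cheapest qualifying capacity out.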

I don’t really care whose CPUs and disk I am using, I really only care that it is there when I want it, offers adequate performance, has proximity to my end users, and meets my pricing expectations.

Cloud Storage Using SSDs on the Layer 2 Switch

Content delivery networks/CDNs want to provide end users the best possible performance and quality – often delivering high-volume video or data files. Traditionally, CDNs build large storage arrays and processing systems within data centers, preferably adjacent to either a carrier hotel meet-me-room or an Internet Exchange Point/IXP, sometimes supported by bundles of 10Gigabit ports connecting their storage to networks and the IXP.

There has been lots of recent discussion on topics such as Fibre Channel over Ethernet/FCoE and Fibre Channel over IP/FCIP. Not good enough. I want the SSD manufacturers and the switch manufacturers to produce an SSD card with a form factor that fits into a slot on existing Layer 2 switches. I want a petabyte of storage directly connected to the switch backplane, allowing unlimited data transfer rates from the storage card to the network ports.

Now a cloud storage provider does not have to buy 50 cabinets packed with SAN/NAS systems in the public data center, only slots in the switch.

IPv6

3tera got the ball rolling with IPv6 support in AppLogic. No more excuses. IPv6 support first, then add IPv4 support as a failover to IPv6. That is the baseline criterion for all other design decisions. No IPv6 – then shred the design.
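The “IPv6 first, IPv4 only as failover” rule is easy to express in code. A minimal sketch using nothing but the standard library; the host and port are placeholders:

```python
# Minimal sketch of "IPv6 first, IPv4 only as failover" using nothing
# but the standard library; host and port are placeholders.
import socket

def connect_v6_first(host: str, port: int) -> socket.socket:
    """Try every IPv6 address for host before falling back to IPv4."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    # Attempt AF_INET6 results before AF_INET ones.
    infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)
    last_error: OSError = OSError("no usable address")
    for family, socktype, proto, _canonname, sockaddr in infos:
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(sockaddr)
            return sock
        except OSError as err:
            last_error = err
    raise last_error

# sock = connect_v6_first("example.com", 80)
```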

Cloud Standardization

Once again the world is being held hostage by equipment and software vendors posturing to make their product the industry standard. The user community is not happy. We want spot markets, the ability to migrate among cloud service providers when necessary, and a basis for future development of the technology and industry.

The IP protocols were developed through the efforts of a global community dedicated to making the Internet grow into a successful utility. Almost entirely supported through a global community of volunteers, the Internet Engineering Task Force and innovators banded together and built a set of standards (RFCs) for all to use when developing their hardware and applications.

Of course there were occasional problems, but the result of their success is the Internet as we know it today.

Standardization is critical in creating a productive development environment for cloud industry and market growth. There are several attempts to standardize cloud elements, and hopefully there will be consolidation of those efforts into a common framework.

Included in the efforts are the Distributed Management Task Force/DMTF Open Cloud Standards Incubator, Open Grid Forum’s Open Cloud Computing Interface working group, The Open Group Cloud Work Group, The Open Cloud Manifesto, the Storage Network Industry Association Cloud Storage Technical Work Group, and others.

Too many to be effective, too many groups serving their own purposes, and we still cannot easily write cloud applications without finding the lower levels of cloud X as a Service/XaaS proprietary.
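In the absence of a common standard, application authors end up writing thin abstraction layers of their own. A minimal sketch of the kind of provider-neutral interface a real standard would make unnecessary (both the interface and the stand-in backend here are hypothetical):

```python
# Sketch of the thin abstraction layer applications write today because
# no common cloud storage standard exists. Both the interface and the
# stand-in backend are hypothetical.
from abc import ABC, abstractmethod

class CloudStore(ABC):
    """Provider-neutral interface the application codes against."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(CloudStore):
    """Stand-in for a real provider adapter (S3, Azure, a private cloud)."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

store: CloudStore = InMemoryStore()  # swap adapters without touching callers
store.put("report.txt", b"quarterly data")
print(store.get("report.txt"))
```

A real standard would push that interface down into the providers themselves, instead of every application carrying its own adapter layer.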

What is on your 2010 wish list?

Happy Cloud New Year!
