Gartner Data Center Conference Looks Into Open Source Clouds and Data Backup
December 5, 2012
Day two of the Gartner Data Center Conference in Las Vegas continued reinforcing old topics, appearing at times either to enlist attendees in contributing to Gartner research, or simply to provide conference content directed at promoting conference sponsors.
For example, the sessions “To the Point: When Open Meets Cloud” and “Backup/Recovery: Backing Up the Future” included a series of audience surveys. Those surveys were apparently the same ones presented, in the same sessions, for several years; the speaker immediately compared this year’s results against results from the same survey questions over the past two years. A casual attendee could conclude that nothing radically new was being presented on these topics, and that attendees were mostly contributing to further trend-analysis research that will eventually appear in a commercial Gartner Research Note.
Gartner analyst Aneel Lakhani, speaker on the topic of “When Open Meets Cloud,” did make a couple of useful, if somewhat obvious, points in his presentation:
- You cannot secure complete freedom from vendors, regardless of how much open source you adopt
- Open source can actually be more expensive than commercial products
- Interoperability is easy to say, but a heck of a lot more complicated to implement
- Enterprise users have a very low threshold for “test” environments (sorry DevOps guys)
- If your organization has the time and staff, test, test, and test a bit more to ensure your open source product will perform as expected or designed
However, analyst Dave Russell, speaker on the topic of “Backup/Recovery,” took a more cut-and-paste approach: lots of survey questions to match against last year’s conference, and a strong emphasis on tape as a continuing, if not growing, medium for disaster recovery.
The problem with this presentation was that the discussion centered on backing up data, with very little on business continuity. In fact, one slide referenced a recovery point objective (RPO) of one day for backups. What organization operating in a global market, in Internet time, can possibly design for a one-day RPO?
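To put that in perspective: RPO is the maximum window of data an organization accepts losing, since a failure just before the next backup wipes out everything since the last one. A minimal sketch (using a hypothetical transaction volume, not a figure from the session) of what a one-day RPO can imply:

```python
from datetime import timedelta

def worst_case_lost_transactions(rpo: timedelta, tx_per_hour: float) -> float:
    """Worst case: a failure strikes just before the next backup runs,
    so every transaction in one full RPO window is lost."""
    return rpo.total_seconds() / 3600 * tx_per_hour

# Hypothetical volume: 5,000 transactions per hour.
daily = worst_case_lost_transactions(timedelta(days=1), 5_000)
hourly = worst_case_lost_transactions(timedelta(hours=1), 5_000)

print(f"one-day RPO:  up to {daily:,.0f} transactions lost")   # 120,000
print(f"one-hour RPO: up to {hourly:,.0f} transactions lost")  # 5,000
```

Even at modest volumes, a one-day RPO can mean six figures of lost transactions, which is exactly why the backup-centric framing felt thin.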
In addition, there was no discussion of the need for compatible hardware at a disaster recovery site that would allow immediate or rapid restart of applications. Having data on tape is fine. Having mainframe archival data is fine. But without a business continuity capability, any organization is likely to suffer significant damage to its ability to function in its marketplace. Very few organizations today can absorb an extended outage of their global or marketplace presence.
The conference continues through Thursday, and we will look for more positive approaches to data center and cloud computing.