A Cloud Computing Wish List for 2010

A cloud spot market allows commercial cloud service providers to announce surplus or idle processing and storage capacity to a cloud exchange. The exchange lets buyers locate available cloud processing capacity, negotiate prices (within milliseconds), and take delivery of the commodity on demand.

Cloud processing and storage spot markets can be privately operated, controlled by industry organizations, or potentially government agencies. Spot markets frequently attract speculators, as cloud capacity prices are known to the public immediately as transactions occur.

The 2010 cloud spot market allows commercial cloud service providers to support both franchise customers (those with dedicated service level agreements) and on-demand customers, with the on-demand customers participating in a spot market that lets them automatically move their applications and storage to the providers offering the best pricing and service levels, based on pre-defined criteria.

I don’t really care whose CPUs and disks I am using; I really only care that the capacity is there when I want it, offers adequate performance, has proximity to my end users, and meets my pricing expectations.
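As a rough illustration, here is a minimal sketch of what that kind of criteria-driven placement could look like. The offer structure, field names, and figures are all hypothetical – no real exchange API is implied.

```python
# Hypothetical sketch of criteria-driven placement on a cloud spot market.
# The Offer structure, field names, and figures are illustrative only --
# no real exchange API is implied.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Offer:
    provider: str
    price_per_cpu_hour: float   # spot price quoted to the exchange
    latency_ms: float           # proximity to my end users
    sla_uptime: float           # committed availability, e.g. 0.999

def pick_provider(offers: List[Offer],
                  max_price: float,
                  max_latency_ms: float,
                  min_uptime: float) -> Optional[Offer]:
    """Return the cheapest offer that meets all pre-defined criteria."""
    eligible = [o for o in offers
                if o.price_per_cpu_hour <= max_price
                and o.latency_ms <= max_latency_ms
                and o.sla_uptime >= min_uptime]
    return min(eligible, key=lambda o: o.price_per_cpu_hour) if eligible else None

# Example: only move the workload if a provider beats our criteria.
offers = [Offer("provider-a", 0.08, 35.0, 0.999),
          Offer("provider-b", 0.05, 120.0, 0.995),
          Offer("provider-c", 0.06, 40.0, 0.999)]
print(pick_provider(offers, max_price=0.07, max_latency_ms=50.0, min_uptime=0.999))
```

In practice the exchange would push live quotes and the migration itself is the hard part; the point is only that the selection logic reduces to a simple filter over pre-defined criteria.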

Cloud Storage Using SSDs on the Layer 2 Switch

Content delivery networks/CDNs want to provide end users the best possible performance and quality – often delivering high volume video or data files. Traditionally CDNs build large storage arrays and processing systems within data centers, preferably adjacent to either a carrier hotel meet-me-room or Internet Exchange Point/IXP.

These installations are often supported by bundles of 10 Gigabit ports connecting the storage to networks and the IXP.

There has been a lot of recent discussion on topics such as Fibre Channel over Ethernet/FCoE and Fibre Channel over IP/FCIP. Not good enough. I want the SSD manufacturers and the switch manufacturers to produce an SSD card with a form factor that fits into a slot on existing Layer 2 switches. I want a petabyte of storage directly connected to the switch backplane, allowing unlimited data transfer rates from the storage card to the network ports.

Now a cloud storage provider does not have to buy 50 cabinets packed with SAN/NAS systems in the public data center, only slots in the switch.

IPv6

3tera got the ball rolling with IPv6 support in AppLogic. No more excuses. IPv6 support first, then add IPv4 support as a failover to IPv6. Make that the basic criterion for all other design decisions. No IPv6 – then shred the design.
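As a rough sketch of what “IPv6 first, IPv4 as failover” can mean at the application level, here is a minimal example using only standard Python sockets; the host name is a placeholder.

```python
# Sketch: prefer IPv6 addresses, fall back to IPv4 only if IPv6 fails.
# Standard-library sockets only; "example.com" is a placeholder host.
import socket

def connect_v6_first(host: str, port: int, timeout: float = 5.0) -> socket.socket:
    # Resolve all addresses, then attempt AF_INET6 results before AF_INET.
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)
    last_error: OSError = OSError(f"no usable address for {host}")
    for family, socktype, proto, _canonname, sockaddr in infos:
        try:
            # sockaddr[:2] is (address, port) for both IPv4 and IPv6 results
            return socket.create_connection(sockaddr[:2], timeout=timeout)
        except OSError as err:
            last_error = err
    raise last_error

conn = connect_v6_first("example.com", 80)
print(conn.getpeername())
conn.close()
```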

Cloud Standardization

Once again the world is being held hostage by equipment and software vendors posturing to make their product the industry standard. The user community is not happy. We want spot markets, the ability to migrate among cloud service providers when necessary, and a basis for future development of the technology and industry.

The IP protocols were developed through the efforts of a global community dedicated to making the Internet grow into a successful utility. Almost entirely supported through a global community of volunteers, the Internet Engineering Task Force and innovators banded together and built a set of standards (RFCs) for all to use when developing their hardware and applications.

Of course there were occasional problems, but their success is the Internet as it is today.

Standardization is critical in creating a productive development environment for cloud industry and market growth. There are several attempts to standardize cloud elements, and hopefully there will be consolidation of those efforts into a common framework.

Included in the efforts are the Distributed Management Task Force/DMTF Open Cloud Standards Incubator, Open Grid Forum’s Open Cloud Computing Interface working group, The Open Group Cloud Work Group, The Open Cloud Manifesto, the Storage Network Industry Association Cloud Storage Technical Work Group, and others.

Too many to be effective, too many groups serving their own purposes, and we still cannot easily write cloud applications when the lower levels of the cloud X as a Service/XaaS stack remain proprietary.

What is on your 2010 wish list?

Happy Cloud New Year!

Deleting Your Hard Drives – Entering a Green Data Center Future of SSDs

For those of us old-timers who muscled 9-track tapes onto the 10 ft tall tape drives of Burroughs B-3500 mainframe computers, with a total storage capacity of about 5 kilobytes, the idea of sticking a 64 gigabyte SD memory chip into my laptop computer is pretty cosmic.

Terms like PCAM (punch card adding machines) are no longer part of the taxonomy of information technology, nor would any young person in the industry comprehend the idea of a disk platter or disk pack.

Skipping ahead a bit, we find a time when you could purchase an IBM “XT” computer with an integrated 10 megabyte hard drive. No more reliance on 5.25″ or, later, 3.5″ floppy disks. Hard drives have evolved to the point where Fry’s will pitch you a USB or home network 1 terabyte drive for about $100.

Enter the SSD

October 2009 brings us to the point where hard drives are becoming a compromise solution. The SSD (Solid State Disk) has jumped onto the data center stage. With MySpace’s announcement that they are replacing all 1,770 of their existing disk drive-based server systems with higher capacity SSDs – quoting that the SSDs use only 1% of the power required by disk drives – data center rules are set to change again.

SSDs are efficient. If you read press releases and marketing material supporting SSD sales, you will hear numbers like these:

  • “…single-server performance levels with 1.5GB/sec. throughput and almost 200,000 IOPS”
  • “…a 320GB ioDrive can fill a 10Gbit/sec. Ethernet pipe”
  • “…four ioDrive Duos in a single server can scale linearly, which provides up to 6GB/sec. of read bandwidth and more than 500,000 read IOPS” (Fusion.io)
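
Those quoted figures at least pass a back-of-the-envelope check: 1.5 GB/sec is about 12 Gbit/sec, which is indeed more than a single 10 Gbit/sec Ethernet port can carry, and four devices scaling linearly gives the quoted 6 GB/sec.

```python
# Back-of-the-envelope check of the quoted ioDrive figures.
throughput_gbytes_per_sec = 1.5                     # quoted single-device throughput
throughput_gbits_per_sec = throughput_gbytes_per_sec * 8
print(f"{throughput_gbits_per_sec:.0f} Gbit/sec")   # -> 12 Gbit/sec
print(throughput_gbits_per_sec > 10)                # True: enough to fill a 10GbE pipe
print(4 * throughput_gbytes_per_sec)                # -> 6.0 GB/sec for four devices
```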

This means not only are you saving power per server, you are also able to pack a multiple of existing storage capacity into the same space as traditional disk systems. As clusters of SSDs become possible through further development of parallel systems, we need to get our heads around the concept of a three-dimensional storage system, rather than the linear systems used today.

The concept of RAID and tape backup systems may also become obsolete, as SSDs hold their images when primary power is removed.

Now companies like MySpace will be in a really great position to re-negotiate their data center and colocation deals, as their actual energy and space requirements will potentially be a fraction of existing installations. Even considering their growth potential, the reduction in actual power and space will no doubt give them more leverage to use in the data center agreements.

Why? Data center operators are now planning their unit costs and revenues based on power sales and consumption. If a company like MySpace is able to reduce their power draw by 30% or more, this represents a potentially huge opportunity cost to the data center in space and power sales. Advantage goes to the tenant.

The Economics of SSDs

Today, the cost of SSDs is still higher than that of traditional disk systems, even large fibre channel or InfiniBand supported disk (SAN or NAS) installations. According to Yahoo Tech, the cost of an SSD is about 4 times that of a traditional disk. However, they also indicate that the cost is quickly dropping, and we will probably see near parity within the next 3~4 years.

Now, recall MySpace’s claim that with the SSD migration they will consume only 1% of the power used by traditional disk (that is only the disk, not the entire chassis or server enclosure). If you look through a great white paper (actually it is called a “Green Paper”) provided by Fusion.io, you will see that implementing their SSD systems in a large disk farm of 250 servers (components include main memory, 4x net cache, 4x tier 1/2/3 storage, and tape storage) reduces the site’s power draw from 146.6kW to 32kW.

Data centers can charge anywhere from $120~$225/kW per month, meaning that, if you believe the marketing material, we could potentially see a savings of around $20,000/month at $180/kW. That reduction would also represent about 47 tons of carbon, according to the Carbon Footprint Calculator.
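
A rough check of that arithmetic, using only the figures quoted above and assuming the $/kW rate is a monthly charge:

```python
# Rough check of the monthly savings figure, using only the numbers quoted above.
# Assumes the $120~$225/kW data center charge is a monthly rate.
before_kw = 146.6            # quoted site power draw before the SSD migration
after_kw = 32.0              # quoted site power draw after
rate_per_kw_month = 180.0    # mid-range of the quoted $120~$225/kW charges

saved_kw = before_kw - after_kw                    # 114.6 kW
monthly_savings = saved_kw * rate_per_kw_month     # ~ $20,600 per month
print(f"{saved_kw:.1f} kW saved, roughly ${monthly_savings:,.0f}/month")
```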

Fusion.io reminds us that:

“In 2006, U.S. data centers consumed an estimated 61 billion kilowatt-hours (kWh) of energy, which accounted for about 1.5% of the total electricity consumed in the U.S. that year, up from 1.2% in 2005. The total cost of that energy consumption was $4.5 billion, which is more than the electricity consumed by all color televisions in the country and is equivalent to the electricity consumption of about 5.8 million average U.S. households.

• Data centers’ cooling infrastructure accounts for about half of that electricity consumption.

• If current trends continue, by 2011, data centers will consume 100 billion kWh of energy, at a total annual cost of $7.4 billion and would necessitate the construction of 10 additional power plants. (from “Taming the Power Hungry Data Center”)”

When we consider the potential impact of data center consolidation through the use of virtualization and cloud computing, along with the rapid advancement of SSD technologies and capacities, we may be able to make a huge positive impact by reducing the load that Internet, entertainment, content delivery, and enterprise systems place on our electricity supply – and their subsequent impact on the environment.

Of course we need to keep our eyes on the byproducts of technology (e-Waste), and ensure making improvements in one area does not create a nightmare in another part of our environment.

Some Additional Resources

StorageSearch.Com has a great listing of current announcements and articles following and describing the language of the SSD technology and industry. There is still a fair amount of discussion on the quality and future direction of SSDs; however, the future does look very exciting and positive.

For those of us who can still read the Hollerith coding on punch cards, the idea of >1.25TB on an SSD is abstract. But abstract in a fun, exciting way.

How do you feel about the demise of disk? Too soon to consider? Ready to install?

John Savageau, Long Beach
