3tera and AppLogic SWAG Moves to the Cloud Computing Retro Collection

CA and 3tera have announced CA’s acquisition of the innovative cloud computing Infrastructure as a Service vendor. This is a great thing for Computer Associates, and perhaps a bit sad for the cloud community in general. Why? It is hard to put into words the energy and enthusiasm felt when walking into 3tera’s Aliso Viejo office – a tight group of committed entrepreneurs and innovators, with a bit of cockiness earned by the unique stature they held in the cloud computing community.

Not that Computer Associates is a bad company. In fact, they have always been one of the best-kept secrets in business and enterprise software. Rock-solid systems, professional sales and engineering – just not as well known to the broader community as other large enterprise systems vendors.

3tera’s AppLogic brought the cloud community many firsts: the first to integrate IPv6 into a cloud provisioning system, the first to really simplify drag-and-drop provisioning, and perhaps the first to really test and prove the concept of globally distributed processing and disaster recovery models. And they are really great guys.

Bert, Peter, Sean, and the rest of 3tera’s public face spent a tremendous amount of time supporting the community – participating in training events and in organizations such as the Convergence Technology Council of California and the Any2 Exchange Community – bringing not only good community spirit but also strong thought leadership that motivated people to learn more about cloud computing and the future of information technology.

We will deeply miss 3tera, and hope the team will eventually regroup with a new set of ideas, and lead us into another generation of technology that will further enhance the industry’s ability to deliver a true, global, massively distributed cloud computing reality.

Computer Associates will bring value to the cloud community as well. With the power of CA’s organization behind recent acquisitions such as 3tera, Oblicore, NetQoS, Orchestria, Platinum Technology, Netreon, and others related to process, database, and large data set management, the stage is set for increased competition in the cloud service industry. CA has the ability to deliver a breadth of enterprise and Internet-facing tools equal to or better than IBM, Microsoft, or any other full-service integrator.

We look forward to seeing the product of 3tera’s integration into the CA family, and hope the innovation and enthusiasm 3tera’s team brought to the cloud community is not swallowed up by a large-company bureaucracy.

A Cloudy Future for Networks and Data Centers in 2010

The message from the VC community is clear – “don’t waste our seed money on network and server equipment.” The message from the US Government CIO is clear – the US Government will consolidate data centers and start moving towards cloud computing. The message from the software and hardware vendors is clear – there is an enormous investment in cloud computing technologies and services.

If nothing else, the economic woes of the past two years have taught us we need to be a lot smarter on how we allocate limited CAPEX and OPEX budgets. Whether we choose to implement our IT architecture in a public cloud, enterprise cloud, or not at all – we still must consider the alternatives. Those alternatives must include careful consideration of cloud computing.

Cloud 101 teaches us that virtualization efficiently uses compute and storage resources in the enterprise. Cloud 201 teaches us that content networks facing the Internet can make use of on-demand compute and storage capacity in close proximity to networks. Cloud 301 tells us that a distributed cloud gives great flexibility to both enterprise and Internet-facing content. The lesson plan for Cloud 401 is still being drafted.

Data Center 2010

Data center operators traditionally sell space based on cabinets, partial cabinets, cages, private suites, and in the case of carrier hotels, space in the main distribution frame. In the old days revenue was based on space and cross-connects; today it is based on power consumed by equipment.

If the intent of data center consolidation is to relieve the enterprise or content provider of unnecessary CAPEX and OPEX burden, then the data center sales teams should be gearing up for a feeding frenzy of opportunity. Every public cloud service provider from Amazon down to the smallest cloud startup will be looking for quality data center space, preferably close to network interconnection points.

In fact, in the long run, if the vision of cloud computing and virtualization is true, then the existing model of data center should be seen as a three-dimensional set of objects within a resource grid, not entirely dissimilar to the idea set forth by Nicholas Carr in his book The Big Switch.

Facilities will return to their roots of concrete, power, and air-conditioning, adding cloud resources (or attracting cloud service providers to provide those resources), and the cabinets, cages, and private suites will start being dismantled to allow better use of electrical and cooling resources within the data center.

Rethinking the Data Center

Looking at 3tera‘s AppLogic utility brings a strange vision to mind: if I can build a router, switch, server, and firewall into my profile via a drag-and-drop utility, then why would I consider buying my own hardware?

If storage becomes part of the layer 2 switch, then why would I consider installing my own SAN, NAS, or Fibre Channel infrastructure? Why not find a cloud service provider with adequate resources to run my business within their infrastructure, particularly if their network proximity and capacity are adequate to meet any traffic requirement my business demands?
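To make the “profile instead of hardware” idea a bit more concrete, here is a minimal sketch in Python of what describing an application as a set of virtual appliances and links might look like. The class names, component kinds, and properties are entirely hypothetical illustrations – this is not the AppLogic API.

```python
from dataclasses import dataclass, field

@dataclass
class Appliance:
    """One virtual component in the profile (hypothetical model, not AppLogic's)."""
    name: str
    kind: str          # e.g. "firewall", "switch", "server", "storage"
    properties: dict = field(default_factory=dict)

@dataclass
class Profile:
    """A whole application described as appliances plus the links between them."""
    appliances: list = field(default_factory=list)
    links: list = field(default_factory=list)   # (from_name, to_name) pairs

    def add(self, appliance: Appliance) -> "Profile":
        self.appliances.append(appliance)
        return self

    def connect(self, a: str, b: str) -> "Profile":
        self.links.append((a, b))
        return self

# The "drag and drop" equivalent: compose the infrastructure as data,
# then hand the whole profile to a provider to instantiate.
web_tier = (
    Profile()
    .add(Appliance("fw1", "firewall", {"policy": "web-only"}))
    .add(Appliance("lb1", "switch", {"vlan": 10}))
    .add(Appliance("web1", "server", {"image": "linux-web", "ram_gb": 4}))
    .connect("fw1", "lb1")
    .connect("lb1", "web1")
)
print(f"{len(web_tier.appliances)} appliances, {len(web_tier.links)} links")
```

The point of the sketch is simply that the entire “rack” becomes a data structure a provider can instantiate, rather than hardware you purchase and maintain.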

In this case, if the technology behind AppLogic and other similar Platform as a Service (PaaS) offerings is true to the marketing hype, then we can start throwing value back to the application. The network, connectivity, and the compute/storage resource become an assumed commodity – much like the freeway system, water, or the electrical grid.

Flowing the Profile to the User

We old guys used to watch a SciFi series called “Max Headroom.” Max Headroom was a fictional character who lived within the “Ether,” able to move around through computers and electrical grids – and pop up wherever in the network he desired. Max could also absorb any of the information within computer systems or other electronic intelligence sources, and deliver his findings to news reporters who played the role of investigative journalists.

We are entering an electronic generation not too different from the world of Max Headroom. If we use social networking, or public utility applications such as Hotmail, Gmail, or Yahoo Mail, our profile flows to the network point closest to our last request for application access. There may be a permanent image of our data stored in a mother ship, but the most active part of our profile is parsed to a correlation database near our access point.

Thus, if I am a Gmail user living in Los Angeles, my correlated profile is available at a Google data cache someplace with proximity to Los Angeles. If I travel to Hong Kong, Gmail thinks “Hmmm… he is in HK; we should parse his Gmail image to our HK cache and hope he gets the best possible performance out of the Gmail product from that point.”
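A rough sketch of the idea – not a description of how Google actually implements it – is a simple “nearest cache” selection based on where the last access came from. The region names and coordinates below are made up purely for illustration:

```python
import math

# Hypothetical cache locations (latitude, longitude) - illustration only.
CACHES = {
    "us-west": (34.05, -118.24),   # Los Angeles area
    "asia-east": (22.32, 114.17),  # Hong Kong area
    "eu-west": (51.51, -0.13),     # London area
}

def distance_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_cache(user_location):
    """Pick the cache region closest to the user's last access point."""
    return min(CACHES, key=lambda region: distance_km(CACHES[region], user_location))

# The active slice of the profile follows the user around the world.
print(nearest_cache((22.3, 114.2)))   # access from Hong Kong -> "asia-east"
print(nearest_cache((33.9, -118.4)))  # access from Los Angeles -> "us-west"
```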

I, as the user, do not care which data center my Gmail profile is cached in; I only care that my end-user experience is good and I can get my work done without unnecessary pain.

The data center becomes virtual. The application flows to the location needed to do the job and make me happy. XYZ.com, which handles my mail day-to-day, must understand its product will become less relevant and less effective if its performance on a global scale does not meet international standards. Those standards are being set by companies using cloud computing on a global, distributed model to do the job.

2010 is the Year Data Centers Evolve to Support the Cloud

A 100sqft data center cage is rapidly becoming as senseless as buying a used DMS250. The cost in hardware, software, peopleware, and the operational expense of running a small data center presence simply does not make sense. Nearly everything that can be done in a 100sqft cage can be done in a cloud, letting the services provider concentrate on delivering end-user value and leaving compute, storage, and network access to utility providers.

And when the 100sqft cage is absorbed into a more efficient resource, the cost – electrical, mechanical, and environmental – will drop by nearly 50%, given the potential for better data center management using strict hot/cold aisle separation, hot or cold aisle containment, containers – all those things data center operators are scrambling to understand and implement.

Argue the point, but by the end of 2010, the ugly data center caterpillar will come out of its cocoon as a better, stronger, and very cloudy utility for the information technology and interconnected world to exploit.

A Cloud Computing Wish List for 2010

A cloud spot market lets commercial cloud service providers announce surplus or idle processing and storage capacity to a cloud exchange. The exchange allows buyers to locate available cloud processing capacity, negotiate prices (within milliseconds), and deliver the commodity to customers on demand.

Cloud processing and storage spot markets can be privately operated, controlled by industry organizations, or potentially government agencies. Spot markets frequently attract speculators, as cloud capacity prices are known to the public immediately as transactions occur.

The 2010 cloud spot market lets commercial cloud service providers support both franchise (dedicated service level agreement) customers and on-demand customers, with the on-demand side able to automatically move applications and storage to the providers offering the best pricing and service levels based on pre-defined criteria.

I don’t really care whose CPUs and disks I am using; I only care that they are there when I want them, offer adequate performance, have proximity to my end users, and meet my pricing expectations.
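Here is a minimal sketch of what the matching step of such a spot market might look like: pick the cheapest announced offer that still meets pre-defined latency, capacity, and price criteria. The offer fields and thresholds are hypothetical, purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class SpotOffer:
    provider: str
    price_per_cpu_hour: float   # advertised spot price
    latency_ms: float           # measured latency to my end users
    free_cpus: int              # idle capacity being announced

def pick_offer(offers, needed_cpus, max_latency_ms, max_price):
    """Return the cheapest offer that satisfies the buyer's pre-defined criteria."""
    eligible = [
        o for o in offers
        if o.free_cpus >= needed_cpus
        and o.latency_ms <= max_latency_ms
        and o.price_per_cpu_hour <= max_price
    ]
    return min(eligible, key=lambda o: o.price_per_cpu_hour, default=None)

offers = [
    SpotOffer("cloud-a", 0.09, 35.0, 400),
    SpotOffer("cloud-b", 0.06, 120.0, 900),   # cheap, but too far from my users
    SpotOffer("cloud-c", 0.07, 40.0, 250),
]
best = pick_offer(offers, needed_cpus=200, max_latency_ms=50.0, max_price=0.10)
print(best.provider if best else "no eligible offer")   # -> "cloud-c"
```

A real exchange would add bidding, settlement, and service-level verification, but the buyer’s side of the decision is essentially this filter-and-rank step run continuously.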

Cloud Storage Using SSDs on the Layer 2 Switch

Content delivery networks/CDNs want to provide end users the best possible performance and quality – often delivering high volume video or data files. Traditionally CDNs build large storage arrays and processing systems within data centers, preferably adjacent to either a carrier hotel meet-me-room or Internet Exchange Point/IXP.

These deployments are often supported by bundles of 10 Gigabit ports connecting the storage to networks and the IXP.

There has been lots of recent discussion of topics such as Fibre Channel over Ethernet/FCoE and Fibre Channel over IP/FCIP. Not good enough. I want the SSD manufacturers and the switch manufacturers to produce an SSD card in a form factor that fits a slot on existing Layer 2 switches. I want a petabyte of storage directly connected to the switch backplane, allowing data transfer rates from the storage card to network ports limited only by the backplane itself.

Now a cloud storage provider does not have to buy 50 cabinets packed with SAN/NAS systems in the public data center, only slots in the switch.

IPv6

3tera got the ball rolling with IPv6 support in AppLogic. No more excuses. IPv6 support first, then add IPv4 support as a failover to IPv6. Make that the baseline criterion for every other design decision. No IPv6 – shred the design.
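As a rough illustration of the “IPv6 first, IPv4 as failover” principle, a small Python sketch that resolves a host and tries its IPv6 addresses before falling back to IPv4 might look like the following (a simplified take on dual-stack connection logic, not a production implementation):

```python
import socket

def connect_v6_first(host, port, timeout=5.0):
    """Try every IPv6 address for the host first, then fall back to IPv4."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    # Order candidates so IPv6 comes before IPv4.
    infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)
    last_error = None
    for family, socktype, proto, _canon, addr in infos:
        try:
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(timeout)
            sock.connect(addr)
            return sock   # first successful connection wins
        except OSError as err:
            last_error = err
    raise last_error or OSError("no addresses returned for " + host)

# Example usage: prefers an IPv6 connection when the host and local network support it.
conn = connect_v6_first("www.google.com", 80)
print(conn.family)   # AF_INET6 on a v6-enabled network, AF_INET otherwise
conn.close()
```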

Cloud Standardization

Once again the world is being held hostage by equipment and software vendors posturing to make their product the industry standard. The user community is not happy. We want spot markets, the ability to migrate among cloud service providers when necessary, and a basis for future development of the technology and industry.

The IP protocols were developed through the efforts of a global community dedicated to making the Internet grow into a successful utility. Supported almost entirely by volunteers, the Internet Engineering Task Force and other innovators banded together and built a set of standards (RFCs) for all to use when developing their hardware and applications.

Of course there were occasional problems, but their success is the Internet as it is today.

Standardization is critical in creating a productive development environment for cloud industry and market growth. There are several attempts to standardize cloud elements, and hopefully there will be consolidation of those efforts into a common framework.

Included in the efforts are the Distributed Management Task Force/DMTF Open Cloud Standards Incubator, Open Grid Forum’s Open Cloud Computing Interface working group, The Open Group Cloud Work Group, The Open Cloud Manifesto, the Storage Network Industry Association Cloud Storage Technical Work Group, and others.

Too many to be effective, too many groups serving their own purposes – and we still cannot easily write portable cloud applications, because the lower levels of the cloud X as a Service/XaaS stack remain proprietary.

What is on your 2010 wish list?

Happy Cloud New Year!

Insulating Innovators from Infrastructure with Cloud Computing

The SMS message was desperate. AJ sent the plea “If I have to see one more picture of a cloud in a PPT I might lose it…” After two days of presentations at the Cloud Computing Conference and Expo, where companies tried to bring the audience up to an Intro to Clouds 101 level, some attendees were grasping for new ideas, new information, and new reasons why companies should release IT models currently governed by strict FUD-Factor (Fear, Uncertainty, and Doubt) compliance to the new generation of cloud computing.

The “same slides, different day” approach was starting to leave some attendees a bit glazed, until Shelton Shugar, SVP of Cloud Computing at Yahoo!, kicked off the morning with his keynote speech “Accelerating Innovation with Cloud Computing.” Shugar woke the audience up with an overview of how Yahoo! is “walking the talk” with cloud computing deployments in its own network.

Yahoo! Mail, Sports, Finance, and other applications – all are using some level of cloud compute support based on Hadoop. Shugar detailed Yahoo’s support of the open source community through their “Open Cirrus” program. Not only aggressive cloud computing thought leadership, but actual industry leadership.

Insulating Innovators

Perhaps the most enlightening “sound bite” of the morning was Shugar’s statement that cloud computing relieves developers from spending time on IT, allowing them to “focus time on their (business) problems, and not on the infrastructure.”

This is really significant. After sitting through several presentations at the Cloud Computing Conference and Expo in Santa Clara – most repeating the same lines about reduced OPEX, CAPEX, energy savings, IaaS, PaaS, SaaS, and so on – it was refreshing to hear Shugar bring the ideas into a perspective business managers could relate to their own professional pain points, and open new ideas of what value this cloud “thing” might actually offer.

I remember the old days (of the ’90s), working at a telecom company breaking into the emerging Internet industry. We had a training section that spent a lot of its schedule supporting remote-access training for NOC (network operations center) technicians who needed high-level access to servers and routers. The training section maintained dozens of switches, routers, and servers in a computer room to support the training environment.

Each student needed practice working at the command line interface of network hardware; however, in their day-to-day job they would never physically touch a network device, as the actual device could be located anyplace in the world – they simply needed to practice troubleshooting and monitoring through remote access.

Looking around the conference hall at the Cloud Computing Conference, I saw companies such as 3tera offering provisioning tools that automatically produce images of servers, switches, and routers within a virtual environment. You need a new Linux box? You drag and drop a pre-configured Linux image into your environment. It “spools” and is ready for access within about two seconds. From the user’s perspective, it is a physical Linux server that could very well be mounted in the next room. The object functions exactly as a physical server would.

Within the virtual environment the instructor (or students) could spool up as many virtual images of the Linux box as needed to meet the class’s training requirements. The instructor and training division no longer have to spend time each day wiping servers, reloading images, or replacing failed memory and hard drives – all the non-productive tasks that traditionally kept them from spending their valuable time building better curriculum, spending more time with students, or delivering the course as eLearning anyplace in the company.
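To make the training example concrete, here is a small sketch of what “spooling” a class’s worth of lab servers from one template might look like. The provisioning client and its clone_instance / destroy_instance calls are assumed for illustration – they are not a real 3tera or vendor API:

```python
def provision_lab(client, template="linux-lab-image", students=20):
    """Clone one pre-configured template into an instance per student.

    `client.clone_instance` is a hypothetical provisioning call standing in
    for whatever the cloud platform actually exposes.
    """
    instances = []
    for n in range(1, students + 1):
        name = f"lab-{n:02d}"
        instance = client.clone_instance(template=template, name=name)
        instances.append(instance)
    return instances

def reset_lab(client, instances):
    """At the end of class, tear everything down instead of wiping physical servers."""
    for instance in instances:
        client.destroy_instance(instance)
```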

Now apply the same idea to any job where you have either knowledge workers or manual workers spending any amount of their time on IT infrastructure-related tasks which do not directly produce revenue or some level of customer service (a broad category). Even better if you consider the supporting IT infrastructure may not be in the same building, city, or even the same region. You may be getting your applications and IT support through a public cloud service provider (CSP) physically located in a different country!

The idea of insulating your knowledge workers from the IT infrastructure is one more item for our bag of 30-second cloud elevator pitches. It is great when such a simple statement can carry such profound meaning. Looking around the auditorium as Shugar made the statement and described the need to insulate our knowledge workers from the burden of IT infrastructure operations and management, I could see about 1000 pairs of eyes brighten, eyebrows rise a bit higher, and smiles appear on the faces of attendees who had finally breached the layer of skepticism and fog that had drawn them to the conference.

The rest of the conference will now be a far more productive outlet for their new enthusiasm – for learning what cloud computing is today, and what innovations they will be able to apply to cloud platforms and infrastructure in the future.

John Savageau, Long Beach (from the Cloud Computing Conference and Expo, Santa Clara, California)

3tera Drops IPv6 Into Their AppLogic Cloud Computing Platform

“If we look at cloud (service) in a global sense, not just as my service or your service, or my country or your country, then IPv6 is part of the future and the solution.” (Bert Armijo, SVP 3tera)

IPv6 is hitting everybody in the Internet industry on a global scale. 3tera recognized early in the evolution of cloud products that IPv6 was critical to both the short-term and long-term development of their AppLogic product, for public-facing Internet services as well as cloud deployments within the enterprise. The need is real.

The IPv4 Reality 3tera Faced

The Internet operates with devices connecting to each other on a global scale. Each device, whether a physical switch, server, or computer, has an address. Each application, piece of data, and piece of content is located in the Internet through use of an address. Everything in the Internet uses an address. Currently, addresses defined by Internet Protocol version 4 (IPv4) are the most widely used – and we’ve used up most of the available addresses.

The American Registry for Internet Numbers (ARIN) is very clear on the dangers of ignoring the velocity of IPv4 address depletion: less than 15% of the available IPv4 address space remains for distribution to the global community, and once it is depleted, Internet growth stops. There may be temporary measures and “work-arounds” to get us through the near term; however, the cold hard fact remains that our Internet is in danger of running out of address space.

In a notice sent to the US Internet community, ARIN’s Board of Trustees made a couple of points very clear about what will happen if the community ignores the need to adopt and migrate to IPv6:

BE IT RESOLVED, that this Board of Trustees hereby advises the Internet community that migration to IPv6 numbering resources is necessary for any applications which require ongoing availability from ARIN of contiguous IP numbering resources; and,

BE IT RESOLVED, that this Board of Trustees hereby requests the ARIN Advisory Council to consider Internet Numbering Resource Policy changes advisable to encourage migration to IPv6 numbering resources where possible. https://www.arin.net/knowledge/about_resources/v6/v6-resolution.html

While IPv4 gives the Internet-connected world about 4.3 billion addresses, IPv6 gives the world around 3.4 x 10^38 addresses. That is a bunch of addresses… enough to get our planet through a couple more generations of Internet users, and enough to connect virtually every possible virtual or physical device we as a species are likely to need in the next thousand years or so (OK, maybe a “forward looking statement”).
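The two numbers fall straight out of the address lengths – IPv4 addresses are 32 bits and IPv6 addresses are 128 bits – which a quick Python check confirms:

```python
# 32-bit IPv4 address space vs. 128-bit IPv6 address space
print(2 ** 32)             # 4294967296 -> roughly 4.3 billion IPv4 addresses
print(2 ** 128)            # 340282366920938463463374607431768211456 IPv6 addresses
print(f"{2 ** 128:.1e}")   # 3.4e+38, i.e. about 3.4 x 10^38
```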

3tera Takes on IPv6

The Los Angeles area is a good place to meet the folks at 3tera, familiar faces at local industry and community events. Sometimes they are considered scary in their vision of the network- and applications-enabled future; sometimes they are considered really good guys who are a lot of fun to talk with at a conference, or in the parking lot after an event. Normal guys, until they start talking about their trade. And the 3tera guys are very serious about their trade.

Make a seemingly simple comment like, “What have you done in 3tera’s AppLogic product for developing IPv6?” and you are rewarded with a cold stare, silence, and the fear that you have either said something so incredibly stupid it is a conversation killer, or you have struck a nerve.

With Peter Nikolov (President, CTO, and COO) and Bert Armijo (SVP of Sales, Marketing, Product Management, and about everything else…), the IPv6 nerve ran deep. Understanding their position in the cloud computing market, the critical issue of IPv4 depletion, the enabling qualities of adopting IPv6, and the reality that our planet will need leaders in the IPv6 space, 3tera rolled up their sleeves, put more coffee in the room, and started breaking down the problem.

And on October 1st, 3tera formally launched IPv6 support in AppLogic.

While in the US we have started creating awareness of the need to move to IPv6, Bert reminds us that the urgency to accelerate IPv6-enabled applications and network support is more acute in Europe and Asia than in the United States. “Internationally there are fewer IPv4 addresses available (through the regional Internet address registries), addresses are harder to order (longer and more complex justification process), and are much more expensive.”

Thus 3tera has recently received much more interest in their IPv6 product planning and roadmap from Asia and Europe than from the US, which should serve as a wake-up call for Americans.

What Does 3tera’s Implementation of IPv6 do for the Client?

Bert Armijo understands that building a company’s IT infrastructure is difficult enough without the additional burden of planning for migrations, restacking applications, renumbering applications, and redeploying them. The whole philosophy of building into a cloud is to enable rapid deployment of presence and applications, and to control the cost of labor and capital needed for both organic and seasonal (event-driven) growth.

“We wanted to break this problem down to the simplicity of a software appliance,” advised Bert. “We built IPv6 support into AppLogic (3tera’s cloud operating system and main product) as a drag and drop appliance, which when using an existing (AppLogic-enabled service) would allow the user to drag and drop the IPv6 appliance into their application and automatically configure the application for IPv6 support.”

“…until now, IPv6 adoption has been slowed by the perception that it requires both support on the client side and complex code changes in applications. With its AppLogic cloud computing platform, it is no longer necessary to make changes in the configuration of the software in order to be able to support IPv6, while still keeping the data available to IPv4 users.” (from 3tera PR)

This of course works from the ground up as well, offering support for building infrastructure in native IPv6 – something Asian and European providers are jumping on, and something the American IT community should seriously consider.
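One common way to get the effect the press release describes – serving both protocol families without touching application code – is to put a small dual-stack front end in front of an IPv4-only back end. The sketch below is a generic illustration of that pattern (with assumed ports and addresses), not 3tera’s actual appliance:

```python
import socket
import threading

def relay(src, dst):
    """Copy bytes one way until the source closes."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.close()

def dual_stack_front_end(listen_port, backend_host, backend_port):
    """Accept IPv6 (and v4-mapped IPv4) clients and relay them to an IPv4-only back end."""
    listener = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    listener.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)  # accept both families
    listener.bind(("::", listen_port))
    listener.listen(16)
    while True:
        client, _addr = listener.accept()
        backend = socket.create_connection((backend_host, backend_port))
        threading.Thread(target=relay, args=(client, backend), daemon=True).start()
        threading.Thread(target=relay, args=(backend, client), daemon=True).start()

# Example: IPv6 users on port 8080 reach an IPv4-only service at 192.0.2.10:80.
# dual_stack_front_end(8080, "192.0.2.10", 80)
```

The application behind the front end never sees IPv6 at all, which is the general appeal of appliance-style approaches to dual-stack support.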

Bert goes through a list of applications that are being made aware of IPv6 within clouds, including mobile telephony, video distribution, and security. However, the most urgent customer demands fall into disaster recovery and security implementations through the cloud.

While it is clear the cloud has great support, and is starting to meet customer expectations for application and resource management – such as being able to move and schedule resources based on time or geography – there are a couple of interesting implications becoming apparent.

The Intercontinental Cloud

“First production implementations (for IPv6 in AppLogic) are for VPNs,” said Bert. “Predominantly for VPNs between continents, such as Asia to the USA, and Europe to the USA.”

Using IPv6 within a wide-area distributed network, with the advanced security potential offered within the protocol, brings up some interesting questions – such as, what is the future of wide-area MPLS networks?

If the cloud offers the same level of security and portability, and lets applications easily move large data sets across a wide area – for proximity-based processing, least-cost processing, and disaster recovery purposes – then we might see some very interesting developments in the future.

What Next?

There are clearly many more aspects to 3tera’s IPv6 implementation in the AppLogic product, and it is still very early in the development process. Other cloud vendors will eventually bring out their own versions of IPv6 support. Eventually the American IT and Internet industry will awaken not only to the urgency of IPv4 depletion, but also to envisioning what other applications and services may emerge or become possible in a network and cloud resource world running IPv6.

Verizon Wireless will drive its LTE/4G network on IPv6, and the entertainment community is frantically learning what the protocol can do for the future of video. President Obama’s CIO has thrown down the gauntlet for the US Government to adopt IPv6, and the Asian and European communities are beginning to look at the US Internet market as a roadblock in global technology development.

We need to keep our eyes and minds open to what is happening in the IPv6 world, and also look inside at our own businesses and organizations. We are rapidly approaching one of those points in history where our future will be defined by our ability to plan ahead for a disruption in technology and for the impact of market globalization on even mom-and-pop businesses (I can order breakfast cereal over the Internet!).

IPv6 will be part of that future. 3tera will be part of that future. An exciting future, and we cannot wait to see what emerges from 3tera’s research and development team next.

John Savageau, Long Beach
