Kundra Scores Again with 25 Point Federal IT Implementation Plan

On December 9th, Vivek Kundra, the U.S. Chief Information Officer (US CIO), released a “25 Point Plan to Reform Federal Information Technology Management.” Kundra acknowledges the cost of IT systems to the American people (approximately $600 billion during the past decade), and the reality that even with this investment the federal government lags behind private industry in both functionality and governance.

Highlights of the plan include a push toward data center consolidation, a “cloud first” policy for new IT projects (as well as IT refresh), a search and destroy mission targeting deadbeat and under-performing projects, and the use of professional program managers and acquisition specialists to streamline the purchase and implementation of IT systems.

Sounds Good, But is it Real?

It is quite possible the document reads as impressive and encouraging because of the talented writers assigned to spin Kundra’s message. On the other hand, it all makes a lot of, well, plain good sense.

For example, on the topic of public-private partnerships and engaging industry early in the planning process, the plan notes:

Given the pace of technology change, the lag between when the government defines its requirements and when the contractor begins to deliver is enough time for the technology to fundamentally change, which means that the program may be outdated on the day it starts …

…In addition, requirements are often developed without adequate input from industry, and without enough communication between an agency’s IT staff and the program employees who will actually be using the hardware and software…

…As a result, requirements are too often unrealistic (as to performance, schedule, and cost estimates), or the requirements that the IT professionals develop may not provide what the program staff expect – or both.

This makes a lot of sense.  Face it, the government does not develop innovation or technology; private industry develops innovation.  And government, as the world’s largest IT user, consumes that technology.

And since the government is so large, it is nearly impossible for the government to collect and disseminate best practices and operational “lessons learned” at the same pace possible within private industry.  In private industry, aggressive governance and cooperation with vendors are essential to a company’s survival and ultimate success.

On Innovation

Small businesses in the technology space drive enormous innovation throughout the economy. However, the Federal Government does not fully tap into the new ideas created by small businesses…

…smaller firms are more likely to produce the most disruptive and creative innovations. In addition, with closer ties to cutting edge, ground-breaking research, smaller firms often have the best answers for the Federal Government

Kundra goes on to acknowledge the fact that small companies are where innovation happens within any industry or market.  While Cisco, Microsoft, Google, and others such as Computer Associates have a wide range of innovative products and solutions, a large percentage of those ideas come from acquisitions absorbed in an effort to reinforce the large company’s market strategy.

Small, innovative companies produce disruptive ideas and technologies, and the federal government should not be prevented from exposure to, and potential purchase of, products being developed outside of the Fortune 500.  Makes sense for the government, makes sense for the small business community.

Technology Fellows

Within 12 months, the office of the Federal CIO will create a technology fellows program and the accompanying recruiting infrastructure. By partnering directly with universities with well-recognized technology programs, the Federal Government will tap into the emerging talent pool and begin to build a sustainable pipeline of talent.

While projects sponsored by the National Science Foundation and the Defense Advanced Research Projects Agency (DARPA) have been around for a while, this is still a very refreshing attitude towards motivating both students and those who lead our students.

The American technology industry, while still the best in the world, works kind of like Cisco or Google. With a few exceptions, the skills and talent those companies need to maintain their competitive dominance in their markets must be imported from other countries.  If you do not believe this, take a drive through Palo Alto, Milpitas, or stop for lunch on Tasman Drive in Santa Clara.  English is not always the dominant language.

However, that does not need to be the case, nor does the US tech brain pool need to revolve around Silicon Valley.  If the US Government and Kundra are true to this idea, then partnering with all levels of education throughout the United States to develop either high level technologies, or even small components of those technologies, can only serve to increase the intellectual and subsequent technology capacity of our country.

People and companies rarely lose motivation when faced with attainable challenges or success; by nature, each success raises the threshold for the next.

Cloud Computing is the Next Cyclone of Technology

Overall, everything in the 25 point plan eventually points back to cloud computing.  Like a low pressure system sucking in hot air and developing circulation, the CIO’s cloud computing strategy will continue to attract additional ideas and successes, making Information and Communications Technology (ICT) efficient and an enabling tool for our future growth.

Cloud computing, within the context of the 25 point plan, enables data center consolidation, software innovation, public-private partnerships, efficiency, transparency, and “green” everything:

We need to replace these “stovepiped” efforts, which too often push in inconsistent directions, with an approach that brings together the stakeholders and integrates their efforts…

The cloud computing cyclone will not stop with the federal government.  Once the low begins to strengthen and develop circulation, it will continue sucking state government initiatives, local governments, the academic community, and industry into the “eye.” 

The financial benefits of converting operational and capital budgets currently wasted on building and maintaining inefficient systems into innovation, product development, and better program management for government and educational programs are essential in promoting economic growth, not to mention reducing a nightmare national deficit.

Hopping on the “Kundra Vision” Bandwagon

As Americans we need to expose ourselves to Kundra’s programs and strategy.  No strategy is perfect, and every strategy can benefit from the synergies of a country with 300 million citizens who have ideas, visions, and strong desires to contribute to a better America.  We need to push our ideas to both local and federal thought leaders, including the US CIO’s office.  Push through your representatives, through blogs, through your technology vendors.

If Kundra is true to his word, and this is the new vision for an American ICT-enabled future, your efforts will not be wasted.

Are Public Mail Systems a Danger in Developing Countries?

Over the past two years I’ve interviewed dozens of government ICT managers in countries throughout Asia, the Caribbean, and Europe.  One of the surprising items collected during the interviews is the large number of government employees, some at the highest levels, using public mail systems for their professional communications.

While this might appear to be a non-issue to some, others might find it both a security issue (using a foreign commercial company to process and store government correspondence) and an identity issue (using an XXX@gmail.com or XXX@yahoo.com address while communicating as a government employee or official).

Reasons provided in interviews for why government employees are using commercial email systems include:

  • Lack of timely provisioning by government ICT managers
  • Concerns over lack of privacy within a government-managed email system
  • Desire to work from home or while mobile, and the government system does not support remote or web access to email (or the perception this is the case)
  • Actual mail system performance is better on public systems than internal government-operated systems
  • Government ICT systems have a high internal transfer cost, even for simple utilities such as email

and so on.

When pressed further, many were not aware of the risk that government correspondence processed through public systems could result in copies being stored on storage systems likely located in other countries.  Depending on the country, that stored email could easily be provided to foreign law enforcement agencies under lawful warrants, exposing potentially sensitive information for exploitation by a foreign government.

Are Public Email Accounts Bad?

Not at all.  Most of us use at least one personal email address on a public mail system, some of us many.  Public systems allow on-demand creation of accounts, and if desired allow individuals to create anonymous identities for use on social media or other public networks.

Public addresses can separate an individual’s online identity from their “real world” identity, allowing higher levels of privacy and anonymous participation in social media or other activities where the user does not wish to have their full identity revealed.

The addresses are also quite simple to use, cost nothing, and are in use around the world.

Governments are also starting to make better use of commercial or public email outsourcing, with the City of Los Angeles being one of the more well-known projects.  The City of LA has service level agreements with Google (their outsourcing provider), assuring security and confidentiality, as well as operational service levels.

This is no doubt going to be a continuing trend, with public private partnerships (PPPs) relieving government users from the burden of infrastructure and some applications management.  With the US CIO Vivek Kundra aggressively pushing the national data center consolidation and cloud computing agenda, the move towards hosted or SaaS applications will increase.

Many benefits here as well, including:

  1. Hosted mail systems keep copies of mail in central storage – much more secure than an individual PC holding the only copy of mail pulled from a POP server
  2. Access from any Internet connected workstation or computer (of course assuming good passwords and security)
  3. Standardization among organizational users (both for mail formatting and client use)
  4. Cheaper operating costs

To address recent budget and human resource challenges, the City of Orlando moved its e-mail and productivity solution to the cloud (application and cloud hosting services provided by Google).  The City has realized a 65 percent reduction in e-mail costs and provided additional features to increase the productivity of workers. (CIO Council, State of Public Sector Cloud Computing)

For developing countries this is probably a good thing – they get all the features and services of best in class email systems, while significantly reducing the cost and burden of developing physical data center facilities.

But for the meantime, as that strategy and vision is defined, the use of public or cloud hosted email services in many developing countries is one of convenience.  We can only hope that commercial email providers safeguard data processed through government users’ personal accounts, used for communicating all levels of government information, with the same service level agreements offered to large users such as the City of LA or City of Orlando.

Government Clouds Take on the ESBaaS

Recent discussions with government ICT leadership related to cloud computing strategies have all brought the concept of Enterprise Service Bus as a Service (ESBaaS) into the conversation.

Now ESBs are not entirely new, but in the context of governments they make a lot of sense.  In the context of cloud computing strategies in governments they make a heck of a lot of sense.

Wikipedia defines an ESB as:

In computing, an enterprise service bus (ESB) is a software architecture construct which provides fundamental services for complex architectures via an event-driven and standards-based messaging engine (the bus). Developers typically implement an ESB using technologies found in a category of middleware infrastructure products, usually based on recognized standards.

Now if you actually understand that, then you are no doubt a software developer.  For the rest of us, this means that with the ESB pattern, participants engaging in service interaction communicate through a services or application “bus.” This bus could be a database, virtual desktop environment, billing/payments system, email, or other service common to one or more agencies. The ESB is designed to handle relationships between participants through common services and a standardized data format.

New services can be plugged into the bus and integrated with existing services without any changes to the core bus service. Cloud users and applications developers will simply add or modify the integration logic.

Participants in a cross-organizational service interaction are connected to the Cloud ESB, rather than directly to one another, including: government-to-government, citizen-to-government, and business-to-government. Rules-based administration support will make it easier to manage ESB deployments through a simplified template allowing a better user experience for solution administrators.
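
For readers who think better in code, here is a minimal sketch of the bus pattern described above, written in Python. The agency names, topic, and message fields are hypothetical examples, not part of any actual government ESB product.

```python
# Minimal sketch of the ESB pattern: services plug into a shared bus and
# exchange standardized messages without point-to-point integrations.
# Service names and message fields below are hypothetical examples.

from collections import defaultdict

class ServiceBus:
    """A toy enterprise service bus: routes messages by topic."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # A new service is "plugged in" without changing the bus itself.
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Every subscriber on the topic receives the same standardized message.
        for handler in self._subscribers[topic]:
            handler(message)

# Two hypothetical agency services share citizen records through the bus.
bus = ServiceBus()
bus.subscribe("citizen.updated", lambda msg: print("Tax office sees:", msg))
bus.subscribe("citizen.updated", lambda msg: print("Licensing office sees:", msg))

bus.publish("citizen.updated", {"citizen_id": "12345", "address": "new address"})
```

The point to notice is that adding a second subscriber required no change to the bus itself, only new integration logic at the edge.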

The Benefits to Government Clouds

In addition to fully supporting a logical service-oriented architecture (SOA), the ESBaaS will enhance or provide:

  • Open and published solutions for managing Web services connectivity, interactions, services hosting, and services mediation environment
  • From a development and maintenance perspective, the Government Cloud ESB allows agencies and users to securely and reliably share information between applications in a logical, cost effective manner
  • Government Cloud ESBs will simplify adding new services, or changing existing services, with minimal impact to the bus or other interfacing applications within the IT environment
  • Improvements in system performance and availability by offloading message processing and isolating complex mediation tasks in a dedicated ESB integration server

Again, possibly a mouthful, but if you can grasp the idea of a common bus providing services to a lot of different applications or agencies, allowing sharing of data and interfaces without complex relationships between each participating agency, then the value becomes much clearer.

Why the Government Cloud?

While there are many parallels to large companies, governments are unique in the number of separate ministries, agencies, departments, and organizations within the framework of government.  Governments normally share a tremendous amount of data between agencies, and in the past this was extremely difficult due to organizational differences, lack of IT support, or individuals who simply did not want to share data with other agencies.

The result, of course, was that many agencies built their own stand alone data systems without central coordination, resulting in a lot of duplicate data items (such as an individual’s personal profile and information, business information, land management information, and other similar data).  Most often, there were small differences in the data elements each agency developed and maintained, resulting in either corrupt or conflicting data.

The ESB helps define a method of connecting applications and users to common data elements, allowing the sharing of both application formats and, in many cases, database data sets.  This allows not only efficiency in software/applications development, but also a much higher level of standardization and common data sharing.

While this may be uncomfortable for some agencies, most likely those which do not want to share their data with the central government or use applications that are standardized with the rest of government, it also supports a very high level of government transparency, a controversial but essential goal of all developing (and developed) governments.

As governments continue to focus on data center consolidation and the great economical, environmental, and enabling qualities of virtualization and on-demand compute resources, integration of the ESBaaS makes a lot of sense. 

There are some very nice articles related to ESBs on the net, which may help you better understand the concept or give some additional ideas.

Let us know your opinion or ideas on ESBaaS.

Disaster Recovery as a First Step into Cloud Computing

Organizations see the benefits of cloud computing; however, many are simply mortified at the prospect of re-engineering their operations to fit into existing cloud service technology or architectures.  So how can we make the first step?

We (at Pacific-Tier Communications) have conducted 103 surveys over the past few months in the US, Canada, Indonesia, and Moldova on the topic of cloud computing.  The surveys targeted both IT managers in commercial companies, as well as within government organizations.

The survey results were really no different than most – IT managers in general find cloud computing and virtualization an exciting technology and service development, but they are reluctant to jump into cloud for a variety of reasons, including:

  • Organization is not ready (including internal politics)
  • No specific budget
  • Applications not prepared for migration to cloud
  • and lots of other reasons

The list and reasoning for not going into cloud will continue until organizations get to the point they cannot avoid the topic, probably around the time of a major technology refresh.

Disaster Recovery is Different

The surveys also indicated another consistent trend – most organizations still have no formal disaster recovery plan.  This is particularly common within government agencies, including the state and local governments surveyed in the United States.

IT managers in many government agencies had critical data stored on laptop computers and desktops, or in most cases kept their organization’s operating data in a server closet with either no backup at all, or onsite backup to a tape system with no offsite storage.

In addition, the central or controlling government/commercial IT organization either had no specific policy for backing up data, or in the worst case had no means of backing up data (central or common storage system) available to individual branch or agency users.

When asked whether they would support automated backup of individual workstations if cloud storage, or even dedicated storage, became available with reasonable technical ease and at an affordable cost, the IT managers agreed, most enthusiastically, that they would, to prevent data loss and reinforce availability of applications.

Private or Public – Does it Make a Difference?

While most IT managers are still worshiping at the shrine of IT Infrastructure Control, there are cracks appearing in the “Great Walls of IT Infrastructure.”  With dwindling IT budgets and explosive user and organizational demand for IT utility, IT managers are slowly realizing the good old days of control are nearly gone.

And to add additional tarnish to pride, the IT managers are also being faced with the probability at least some of their infrastructure will find its way into public cloud services, completely out of their domain.

On the other hand, it is becoming more and more difficult to justify building internal infrastructure when the quality, security, and utility of public services often exceeds that which can be built internally.  Of course there are exceptions to every rule, which in our discussion includes requirements for additional security for government sensitive or classified information.

That information could include military data, citizen identification data, or other similar information that, while securable through encryption and partition management, politically may not be possible to extend beyond the walls of an internal data center (particularly in cases where the data could possibly leave the borders of a country).

For most other information, it is quickly becoming a simple exercise in financial planning to determine whether or not a public storage service or internal storage service makes more sense. 

The Intent is Disaster Recovery and Data Backup

Getting back to the point, with nearly all countries, and in particular central government properties, being on or near high capacity telecom carriers and networks, and the cost of bandwidth plummeting, the excuses for not using network-based off-site backups of individual and organization data are becoming rare.

In our surveys and interviews it was clear IT managers fully understood the issue, need, and risk of failure relative to disaster recovery and backup.

Cloud storage, when explained and understood, would help solve the problem.  As a first step, and assuming a successful first step, pushing disaster recovery (at least on the level of backups) into cloud storage may be an important move ahead in a longer term migration to cloud services.
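
To illustrate how small that first step can be, here is a minimal sketch of an automated off-site backup, assuming an S3-compatible object store and the boto3 Python library. The bucket name and directory path are placeholders; any cloud storage service with a similar API would serve the same purpose.

```python
# Minimal sketch of off-site backup to cloud storage, assuming an
# S3-compatible object store and the boto3 library. Bucket name and
# paths are placeholders, not a recommendation of any specific provider.

import datetime
import shutil
import boto3

def backup_directory(source_dir, bucket, prefix="nightly-backup"):
    # Create a compressed archive of the directory to be protected.
    stamp = datetime.date.today().isoformat()
    archive = shutil.make_archive(f"/tmp/{prefix}-{stamp}", "gztar", source_dir)

    # Copy the archive to off-site object storage.
    s3 = boto3.client("s3")
    key = f"{prefix}/{stamp}.tar.gz"
    s3.upload_file(archive, bucket, key)
    return key

if __name__ == "__main__":
    # Run from cron or another scheduler for automated nightly backups.
    backup_directory("/var/agency-data", bucket="example-agency-backups")
```

Scheduled nightly, something of this sort already covers the basic off-site backup requirement discussed above.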

All managers understood the potential benefits of virtual desktops, SaaS applications, and use of high performance virtualized infrastructure.  They did not always like it, but they understood within the next refresh generation of hardware and software technology, cloud computing would have an impact on their organization’s future.

But in the short term, disaster recovery and systems backup into cloud storage is the least traumatic first step ahead.

How about your organization?

Data Centers Hitting a Wall of Cloud Computing

Equinix lowers guidance due to higher than expected churn in its data centers and price erosion among higher end customers.  Microsoft continues to promote hosted solutions and cloud computing.  Companies such as Lee Technologies, CirraScale, Dell, HP, and SGI are producing containerized data centers to improve the efficiency, cost, and manageability of high density server deployments.

The data center is facing a challenge.  The idea of a raised floor, cabinet-based data center is rapidly giving way to virtualization and highly expandable, easy to maintain, container farms.

“The impact of cloud computing will be felt across every part of life, not least the data center, which faces a degree of automation not yet seen.”

Microsoft CEO Steve Ballmer believes “the transition to the cloud [is] fundamentally changing the nature of data center deployment.” (Data Center Dynamics)

As companies such as Allied Fiber continue to develop visions of high density utility fiber ringing North America, with the added potential of dropping containerized cloud computing infrastructure along fiber routes and power distribution centers, AND the final interconnection of 4G/LTE/XYZ towers and metro cable along the main routes, the potential of creating a true 4th public utility of broadband with processing/storage capacity becomes clear.

Clouds Come of Age

Data center operators such as Equinix have traditionally provided a great product and service for companies wishing either to outsource their web-facing products into a facility with a variety of Internet Service Providers or Internet Exchange Points providing high performance network access, or to eliminate the need for internal data center deployments by outsourcing IT infrastructure into a well-managed, secure, and reliable site.

However the industry is changing.  Companies, in particular startup companies, are finding there is no technical or business reason to manage their own servers or infrastructure, and that nearly all applications are becoming available as cloud-based SaaS (Software as a Service) hosted applications.

Whether you are developing your own virtual data center within a PaaS environment, or simply using Google Apps, Microsoft Hosted Office Applications, or other SaaS, the need to own and operate servers is beginning to make little sense.  Cloud service providers offer higher performance, flexible on-demand capacity, security, user management, and all the other features we have come to appreciate in the rapidly maturing cloud environment.

With containers providing a flexible physical apparatus to easily expand and distribute cloud infrastructure, as a combined broadband/compute utility, even cloud service providers are finding this a strong alternative to placing their systems within a traditional data center.

With the model of “flowing” cloud infrastructure along the fiber route to meet proximity, disaster recovery, or archival requirements, the container model will become a major threat to the data center industry.

What is the Data Center to Do?

Ballmer:

“A data center should be like a container – that you can put under a roof or a cover to stop it getting wet. Put in a slab of concrete, plumb in a little garden hose to keep it cool, yes a garden hose – it is environmentally friendly, connect to the network and power it up. Think of all the time that takes out of the installation.”

Data center operators need to rethink their concept of the computer room.  Building a 150 Megawatt, 2 million square foot facility may not be the best way to approach computing in the future.

Green, low powered, efficient, highly virtualized utility compute capacity makes sense, and will continue to make more sense as cloud computing and dedicated containers continue to evolve.  Containers supporting virtualization and cloud computing can certainly be secured, hardened, moved, replaced, and refreshed with much less effort than the “uber-data center.”

It makes sense, will continue to make even more sense, and if I were to make a prediction, will dominate the data delivery industry within 5~10 years.  If I were the CEO of a large data center company, I would be doing a lot of homework, with a very high sense of urgency, to get a complete understanding of cloud computing and industry dynamics.

I would focus less on selling individual cabinets and electricity, and direct my attention to better understanding cloud computing and the 4th Utility of broadband/compute capacity.  I wouldn’t turn out the lights in my carrier hotel or data center quite yet, but this industry will be different in 5 years than it is today.

Given the recent stock volatility in the data center industry, it appears investors are also becoming concerned.

The Utility and Pain of Internet Peering

In the early 1990s TWICS, a commercial bulletin board service provider in Tokyo, jumped on the Internet. Access was very poor by modern Internet standards, however at the time 128kbps over frame relay (provided by Sprint International) was unique, and in fact represented the first truly commercial Internet access point in Japan.

The good old boys of the Japanese academic community were appalled, and did everything in their power to intimidate TWICS into disconnecting their connection, to the point of sending envelopes filled with razor blades to TWICS staff and the late Roger Boisvert (*), who through Intercon International KK acted as their project manager. The traditional academic community did not believe anybody outside of the academic community should ever have the right to access the Internet, and were determined to never let that happen in Japan.

Since the beginning, the Internet has been a dichotomy of those who wish to control or profit from the Internet, and those who envision the potential and future of the Internet. Internet “peering” originally came about when academic networks needed to interconnect their own “internets” to allow the interchange of traffic and information between separately operated and managed networks. In the Internet academic “stone age” of the NSFNet, peering was a normal and required method of participating in the community. But… if you were planning to send any level of public or commercial traffic through the network you would violate the NSFNET’s “acceptable use policy” (AUP) preventing use of publicly funded networks for non-academic or government use.

Commercial Internet Exchange Points such as the CIX, and eventually the NSF-supported network access points (NAPs), popped up to accommodate the growing interest in public access and the commercial Internet. Face it, if you went through university or the military with access to the Internet or Milnet, and then jumped into the commercial world, it would be pretty difficult to give up the obvious power of interconnected networks bringing you close to nearly every point on the globe.

The Tier 1 Subsidy

To help privatize the untenable growth of the NSFNet (due to “utility” academic network access), the US Government helped pump up American telecom carriers such as Sprint, AT&T, and MCI by handing out contracts to take over control and management of the world’s largest Internet networks, which included the NSFNet and the NSF’s international Connection Managers bringing the international community into the NSFNet backbone.

This allowed Sprint, AT&T, and MCI to gain visibility into the entire Internet community of the day, as well as take advantage of their own national fiber/transmission networks to continue building up the NSFNet community on long term contracts. With that infrastructure in place, those networks were clear leaders in the development of large commercial internet networks. The Tier 1 Internet provider community is born.

Interconnection and Peering in the Rest of the World

In the Internet world Tier 1 networks are required (today…), as they “see” and connect with all other available routes to individual networks and content providers scattered around the world. Millions and millions of them. The Tier 1 networks are also generally facility-based network providers (they own and operate metro and long distance fiber optic infrastructure) which, in addition to offering a global directory for users and content to find each other, also allow traffic to transit their networks on a global or continental scale.

Thus a web hosting company based in San Diego can eventually provide content to a user located in Jakarta, with a larger network maintaining the Internet “directory” and long distance transmission capacity to make the connection either directly or with another interconnected network located in the “distant end” country.

Of course, if you are a content provider, local Internet access provider, regional network, or global second tier network, this makes you somewhat dependent on one or more “Tier 1s” to make the connection. That, as in all supply/demand relationships, may get expensive depending on the nature of your business relationship with the “transit” network provider.

Thus, content providers and smaller networks (something less than a Tier 1 network) try to find places to interconnect that will allow them to “peer” with other networks and content providers, and wherever possible avoid the expense of relying on a larger network to make the connection. Internet “Peering.”

Peering Defined (Wikipedia)

Peering is a voluntary interconnection of administratively separate Internet networks for the purpose of exchanging traffic between the customers of each network. The pure definition of peering is settlement-free or “sender keeps all,” meaning that neither party pays the other for the exchanged traffic; instead, each derives revenue from its own customers. Marketing and commercial pressures have led to the word peering routinely being used when there is some settlement involved, even though that is not the accurate technical use of the word. The phrase “settlement-free peering” is sometimes used to reflect this reality and unambiguously describe the pure cost-free peering situation.

That is a very “friendly” definition of peering. In reality, peering has become a very complicated process, with a constant struggle between the need to increase efficiency and performance on networks and the desire to gain business advantage over the competition.

Bill Norton, long time Internet personality and evangelist, has a new web site called “DR Peering,” which is dedicated to helping Internet engineers and managers sift through the maze of relationships and complications surrounding Internet peering. Not only the business of peering, but also in many cases the psychology of peering.

Peering Realities

In a perfect world peering allows networks to interconnect, reducing the number of transit “hops” along the route from points “A” to “B,” where either side may represent users, networks, applications, content, telephony, or anything else that can be chopped up into packets, 1s and 0s, and sent over a network, giving those end points the best possible performance.

Dr Peering provides an “Intro to Peering 101~204,” reference materials, blogs, and even advice columns on the topic of peering. Bill helps “newbies” understand the best ways to peer, the finances and business of peering, and the difficulties newbies will encounter on the route to a better environment for their customers.

And once you have navigated the peering scene, you realize we are back to the world of who wants to control, and who wants to provide vision. While on one level peering is determined by which vendor provides the best booze and most exciting party at a NANOG “Beer and Gear” or after party, there is another level you have to deal with as the Tier 1s, Tier 1 “wanna-be networks,” and global content providers jockey for dominance in their defined environment.

At that point it becomes a game, where personalities often take precedence over business requirements, and the ultimate loser will be the end user.

Another reality. Large networks would like to eliminate smaller networks wherever possible, as well as control content within their networks. Understandable, it is a natural business objective to gain advantage in your market and increase profits by rubbing out your competition. In the Internet world that means a small access network, or content provider, will budget their cost of global “eyeball or content” access based on the availability of peering within their community.

The greater the peering opportunity, the greater the potential of reducing operational expenses. Less peering, more power to the larger Tier 1 or regional networks, and eventually the law of supply and demand will result in the big networks increasing their pricing, diluting the supply of peers, and increasing operational expenses. Today transit pricing for small networks and content providers is on a downswing, but only because competition is fierce in the network and peering community supported by exchanges such as PAIX, LINX, AMS-IX, Equinix, DE-CIX, and Any2.
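
To put rough numbers behind that supply and demand argument, here is a back-of-the-envelope comparison in Python. Every figure is an illustrative assumption, not a quote from any carrier or exchange.

```python
# Back-of-the-envelope peering economics. All numbers are illustrative
# assumptions; real transit and exchange pricing varies widely by market.

traffic_mbps = 2000            # average traffic the network must deliver
transit_price = 5.00           # assumed $ per Mbps per month for IP transit
ix_port_cost = 1500.00         # assumed monthly cost of an exchange port plus cross connect
peerable_share = 0.40          # share of traffic reachable via settlement-free peers

transit_only = traffic_mbps * transit_price
with_peering = (traffic_mbps * (1 - peerable_share) * transit_price) + ix_port_cost

print(f"Transit only:   ${transit_only:,.0f}/month")
print(f"With peering:   ${with_peering:,.0f}/month")
print(f"Monthly saving: ${transit_only - with_peering:,.0f}")
```

The balance obviously shifts with the port price, the transit price, and how much of your traffic you can actually reach across the exchange fabric, which is exactly the calculation smaller networks and content providers run when deciding where to peer.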

At the most basic level, eyeballs (users) need content, and content has no value without users. As the Internet becomes an essential component of everybody on the planet’s life, and in fact becomes (as the US Government has stated) a “basic right of every citizen,” then the existing struggle for internet control and dominance among individual players becomes a hindrance or roadblock in the development of network access and compute/storage capacity as a utility.

The large networks want to act as a value-added service, rather than a basic utility, forcing network-enabled content into a tiered, premium, or controlled commodity. Thus the network neutrality debates and controversy surrounding freedom of access to applications and content.

This Does Not Help the Right to Broadband and Content

There are analogies provided for just about everything. Carr builds a great analogy between cloud computing and the electrical grid in his book “The Big Switch.” The Internet itself is often referred to as the “Information Highway.” The marriage of cloud computing and broadband access can be referred to as the “4th Utility.”

Internet protocols and technologies have become, and will continue to be reinforced as a part of the future every person on our planet will engage over the next generations. This is the time we should be laying serious infrastructure pipe, and not worrying about whose content should be preferred, settlements between networks, and who gives the best beer head at a NANOG party.

At this point in the global development of Internet infrastructure, much of the debate surrounding peering, paid or unpaid, amounts to noise. It is simply retarding the development of global Internet infrastructure, and may eventually throttle the velocity of innovation in all things Internet that the world craves to bring us into a new generation of many-to-many and individual communications.

The Road Ahead

All is not lost. There are visionaries such as Hunter Newby aggressively pushing development of infrastructure to “address America’s need to eliminate obstacles for broadband access, wireless backhaul and lower latency through new, next generation long haul dark fiber construction with sound principles and an open access philosophy.”

Oddly, as a lifelong “anti-establishment” evangelist, I tend to think we need better controls by government over the future of Internet and Internet vision. Not by the extreme right wing nuts who want to ensure the Internet is monitored, regulated, and restricted to those who meet their niche religions or political cults, but rather on the level of pushing an agenda to build infrastructure as a utility with sufficient capacity to meet all future needs.

The government should subsidize research and development, and push deployment of infrastructure much as it did the Interstate Highway System and the electrical and water utilities. You will have to pay for the utility, but you will, as a user, not be held hostage to the utility. And there will be competition on utility access.

In the Internet world, we will only meet our objectives if peering is made a necessary requirement, and is a planned utility at each potential geographic or logical interconnection point. In some countries, such as Mongolia, an ISP must connect to the Mongolia Internet Exchange as a requirement of receiving an ISP license. Why? Mongolia needs both high performance access to the global Internet and high performance access to national resources. It makes a lot of sense. Why give an American, Chinese, or Singaporean network money to carry an email from one Mongolian user to another Mongolian user (while both are in the same country)? Peering is an essential component of a healthy Internet.

The same applies to Los Angeles, Chicago, Omaha, or any other location where there is proximity between the content and user, or user and user. And peering as close to the end users as technically possible supports all the performance and economic benefits needed to support a schoolhouse in Baudette (Minn), without placing an undue financial burden on the local access provider based on predatory network or peering policies mandated by regional or Tier 1 networks.

We’ve come a long way, but are still taking baby steps in the evolution of the Internet. Let’s move ahead with a passion and vision.

(*)  Roger Boisvert was a friend for many years, both during my tenure as a US Air Force officer and telecom manager with Sprint based in Tokyo (I met him while he was still with McKinsey and a leader in the Tokyo PC User’s Group), and afterwards through different companies, groups, functions, and conferences in Japan and the US.  Roger was murdered in Los Angeles nine years ago, and his death was a true loss to the Internet community, not only in Japan but throughout the world.

Data Center Consolidation and Cloud Computing in Indonesia

2010 brings great opportunities and challenges to IT organizations in Indonesia. Technology refresh, aggressive development of telecom and Internet infrastructure, and aggressive deployment of “eEverything” are shaking the ICT industry. Even the most steadfast division-level IT managers are beginning to recognize the futility of trying to maintain their own closet “data center” in a world of virtualization, cloud computing, and the drive to increase both data center economics and data security.

Of course there are very good models on the street for data center consolidation, particularly on government levels. In the United States, the National Association of State Chief Information Officers (NASCIO) lists data center consolidation as the second highest priority, immediately after getting better control over managing budget and operational cost.

In March the Australian government announced a (AUD) $1 billion data center consolidation plan, with standardization, solution sharing, and developing opportunities to benefit from “new technology, processes or policy.”

Minister for Finance and Deregulation Lindsay Tanner noted Australia currently has many inefficient data centers, very suitable candidates for consolidation and refresh. The problem of scattered or unstructured data management is “spread across Australia, (with data) located in not just large enterprise data centres, but also in cupboards, converted offices, computer and server rooms, and in commercial and insourced data centers,” said Tanner.

“These are primarily older data centres that are reaching the limits of their electricity supply and floor space. With government demand for data center ICT equipment rising by more than 30 per cent each year, it was clear that we needed to reassess how the government handled its data center activities.”

The UK government also recently published ICT guidance related to data center consolidation, with a plan to cut government operated data centers from 130 to around 10~12 facilities. The guidance includes the statement: “Over the next three-to-five years, approximately 10-12 highly resilient strategic data centers for the public sector will be established to a high common standard. This will then enable the consolidation of existing public data centers into highly secure and resilient facilities, managed by expert suppliers.”

Indonesia Addresses Data Center Consolidation

Indonesia’s government is in a unique position to take advantage of both introducing new data center and virtualization technology, as well as deploying a consolidated, distributed data center infrastructure that would bring the additional benefit of strong disaster recovery capabilities.

Much like the problems identified by Minister Tanner in Australia, today many Indonesian government organizations – and commercial companies – operate ICT infrastructure without structure or standards. “We cannot add additional services in our data center,” mentioned one IT manager interviewed recently in a data center audit. “If our users need additional applications, we direct them to buy their own server and plug it in under their desk. We don’t have the electricity in our data center to drive new applications and hardware, so our IT organization will now focus only on LAN/WAN connectivity.”

While all IT managers understand that disaster recovery planning and business continuity are essential, few have brought DR from PowerPoint to reality, leaving much organization data on individual servers, laptops, and desktop computers, all at risk of theft or of loss through the failure of single disk systems.

That is all changing. Commercial data centers are being built around the country by companies such as PT Indosat, PT Telekom, and other private companies. With the Palapa national fiber ring nearing completion, all main islands within the Indonesian archipelago are connected with diverse fiber optic backbone capacity, and additional international submarine cables are either planned or in progress to Australia, Hong Kong, Singapore, and other communication hubs.

For organizations currently supporting closet data centers, or local servers facing the public Internet for eCommerce or eGovernment applications, data centers such as the Cyber Tower in Jakarta offer both commercial data center space and supporting interconnections for carriers, including the Indonesia Internet Exchange (IIX), in a model similar to One Wilshire, The Westin Building, or 151 Front in Toronto. They provide ample space for outsourcing data center infrastructure (particularly for companies with Internet-facing applications), as well as power, cooling, and management for internal infrastructure outsourcing.

The challenge, as with most other countries, is to convince ICT managers that it is in their company’s or organization’s interest to give up the server. Rather than focus their energy on issues such as “control,” “independence (or autonomous operations),” and avoiding the pain of “workforce retraining and reorganization,” ICT managers should consider the benefits of outsourcing their physical infrastructure into a data center, and further consider the additional benefits of virtualization and public/enterprise cloud computing.

Companies such as VMWare, AGIT, and Oracle are offering cloud computing consulting and development in Indonesia, and the topic is rapidly gaining momentum in publications and discussions within both the professional IT community, as well as with CFOs and government planning agencies.

It makes sense. As in cloud computing initiatives being driven by the US and other governments, not only consolidating data centers, but also consolidating IT compute resources and storage, makes a lot of sense. Particularly if the government has difficulty standardizing or writing web services to share data. Add a distributed cloud processing model, where two or more data centers with cloud infrastructure are interconnected, and we can now start to drive down recovery time and point objectives close to zero.

This is not just for government users; a company located in Jakarta can develop a disaster recovery plan simply by backing up critical data at a remote location such as IDC Batam (part of the IDC Indonesia group). As an example, the IDC Indonesia group operates four data centers located in geographically separate parts of the country, all interconnected.

While this does not support zero recovery time objectives, it does allow companies to lease a cabinet or suite in a commercial data center and, at a minimum, install disk systems adequate to meet their critical data restoration needs. It also opens up decent data center colocation space for emerging cloud service and infrastructure providers, all without the burden of legacy systems to refresh.

In a land of volcanoes, typhoons, earthquakes, and man-made disasters Indonesia has a special need for good disaster recovery planning. Through an effort to consolidate organization data centers, the introduction of cloud services in commercial and government markets, and high capacity interconnections between carriers and data centers, the basic elements needed to move forward in Indonesia are now in place.

Communities in the Cloud

In the 1990s community of interest networks (COINs) emerged to take advantage of rapidly developing Internet protocol technologies. A small startup named BizNet on London’s Chiswell Street developed an idea to build a secure, closed network to support only companies operating within the securities and financial industries.

BizNet had some reasonable traction in London, with more than 100 individual companies connecting within the secure COIN. It was somewhat revolutionary at the time, and it served the needs of their target market. Management was also simple, using software from a small company called IPSwitch and their soon to be globally popular “What’s Up” network management and monitoring utility.

However simplicity was the strength of BizNet. While other companies favored strong marketing campaigns and a lot of flash to attract companies to the Internet age, BizNet’s thought leaders (Jez Lloyd and Nick Holland) relied on a strong commitment to service delivery and excellence, and their success became viral within the financial community based on the confidence they built among COIN members.

As networks go, so went BizNet, which was purchased by Level 3 Communications in 1999; the COIN was subsequently dismantled in favor of integrating the individual customers into the Level 3 community.

Cloud Communities

Cloud computing supports the idea of a COIN, as companies can not only build their “virtual data center” within a Platform as a Service/PaaS model, but also develop secure virtual interconnections among companies within a business community – not only within the same cloud service provider (CSP), but also among cloud service providers.

In the “BizNet” version of a COIN, dedicated connections (circuits) were needed to connect routers and switches to a central exchange point run by BizNet. BizNet monitored all connections, reinforcing internal operations centers run by individual companies, and added an additional layer of confidence that helped a “viral” growth of their community.

Gerard Briscoe and Alexandros Marinos delivered a paper in 2009 entitled “Digital Ecosystems in the Clouds: Towards Community Cloud Computing.” In addition to discussing the idea of using cloud computing to support an outsourced model of the COIN, the paper also drills deeper into additional areas such as the environmental sustainability of a cloud community.

As each member of the cloud community COIN begins to outsource their virtual data center into the cloud, they are able to begin shutting down inefficient servers while migrating processing requirements into a managed virtual architecture. Even the requirement for managing high performance switching equipment supporting fiber channel and SAN systems is eliminated, with the overall result allowing a significant percentage of costs associated with equipment purchase, software licenses, and support agreements to be rechanneled to customer or business-facing activities.

Perhaps the most compelling potential feature of community clouds is the idea that we can bring the processing delay between business or trading partners within the COIN to near zero, as the interaction between members is on the same system and will not lose any velocity to delays induced by going through switching, routing, or short/long distance transmission across the Internet or dedicated circuits.

Standards and a Community Applications Library

Most trading communities and supply chains have a common standard for data representation, process, and interconnection between systems. This may be a system such as RosettaNet for the manufacturing industry, or other similar industry specifications. Within the COIN there should also be a central function that provides the APIs, specifications, and other configurations such as security and web services/interconnection interface specs.

As a function of developing a virtual data center within the PaaS model, standard components supporting the COIN such as firewalls, APIs, and other common applications should be easily accessible for any member, ensuring from the point of implementation that joining the community is a painless experience, and a very rapid method of becoming a full member of the community.
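
As a purely hypothetical sketch of such a community applications library, the snippet below shows a shared catalog of message specifications that a new COIN member could validate its documents against before trading. The specification names and required fields are invented for illustration and are not drawn from RosettaNet or any real community.

```python
# Hypothetical sketch of a COIN "community applications library": a shared
# catalog of message specifications that every member validates against.
# Specification names and required fields are invented for illustration.

COMMUNITY_SPECS = {
    "purchase_order.v1": {"required": ["order_id", "buyer", "supplier", "lines"]},
    "shipment_notice.v1": {"required": ["order_id", "carrier", "eta"]},
}

def validate(spec_name, message):
    """Check that a member's message carries every field the community spec requires."""
    spec = COMMUNITY_SPECS[spec_name]
    missing = [field for field in spec["required"] if field not in message]
    if missing:
        raise ValueError(f"{spec_name} message missing fields: {missing}")
    return True

# A new member can join and exchange documents immediately, as long as the
# shared specification is satisfied.
validate("shipment_notice.v1",
         {"order_id": "PO-881", "carrier": "ExampleFreight", "eta": "2011-03-01"})
```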

A Marriage of Community GRIDs and Cloud Computing?

Many people are very familiar with projects such as SETI@home and the World Community Grid. Your desktop computer, servers, or even storage equipment can contribute idle compute and storage capacity to batch jobs supporting everything from searching for extraterrestrial life to AIDS research. You simply register your computer with the target project, download a bit of client software, and the client communicates with a project site to coordinate batch processing of work units/packets.

Now we know our COIN is trying to relieve members from the burden of operating their own data centers – at least those portions of the data center focusing on support of a supply chain or trading community of interest. And some companies are more suited to outsourcing their data center requirements than others. So if we have a mix of companies still operating large data centers with potential sources of unused capacity, and other members in the community cloud with little or no onsite data center capacity, maybe there is a way the community can support itself further by developing the concept of processing capacity as a currency.

As all individual data centers and office LAN/MAN/WANs will have physical connections to the cloud service provider (IaaS provider) through an Internet service provider or dedicated metro Ethernet connection, the virtual data centers produced within the PaaS portion of the CSP will be inherently connectable to any user or facility within the COIN. Of course, that assumes security management will protect the non-COIN connected portions of the community.

Virtually, those members of the community with excess capacity within their own networks could then easily contribute their spare capacity to the community for use as a non-time critical compute resource, or for supporting “batch” processing. Some CSPs may even consider buying that capacity to offer members, either inside or outside the COIN, an additional resource for their virtual customers as low cost, low performance batch capacity, much like SETI@home or the protein folding projects use spare capacity on an as-available basis. It is much like selling your locally produced energy back into a power grid.
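
The mechanics of contributing that spare capacity can look very much like the volunteer computing model mentioned above: a small client polls a community coordinator for a work unit, processes it while the machine is idle, and returns the result. The sketch below is a hypothetical illustration; the coordinator URL and work unit format are placeholders, not a real service.

```python
# Hypothetical sketch of a spare-capacity batch client in the SETI@home style:
# fetch a work unit from a community coordinator, process it, return the result.
# The coordinator URL and payload format are placeholders, not a real service.

import json
import time
from urllib import request

COORDINATOR = "https://coordinator.example-coin.org"   # placeholder endpoint

def fetch_work_unit():
    with request.urlopen(f"{COORDINATOR}/work-units/next") as resp:
        return json.load(resp)

def process(work_unit):
    # Stand-in for the real batch job (rendering, analytics, protein folding...).
    return {"id": work_unit["id"], "result": sum(work_unit["values"])}

def submit(result):
    data = json.dumps(result).encode("utf-8")
    req = request.Request(f"{COORDINATOR}/results", data=data,
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)

if __name__ == "__main__":
    while True:
        submit(process(fetch_work_unit()))
        time.sleep(60)   # only poll occasionally, when the machine is otherwise idle
```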

We Have a New, Blank Cloud White Board to Play With

The BizNet COIN was good. Eleven years after BizNet was dissolved, the concept remains valid, and we now have additional infrastructure that will support COINs through community clouds, with enabling features that extend far beyond the initial vision of BizNet. CSPs such as ScaleUp have built IaaS and PaaS empowerment for COINs within their data center.

Cloud computing is an infant. Well, maybe in Internet years it is rapidly heading toward adolescence, but it is still pretty young. Like an adolescent, we know it is powerful, getting more powerful by the day, but few people have the vision to wrap their heads around what broadband, cloud computing, the diffusion of network-enabled knowledge into the basic education system, and the continuation of Moore’s, Metcalfe’s, and other laws of industry and physics will ultimately deliver.

COINs and community clouds may not have been in the initial discussions of cloud computing, but they are here now. Watching a Slingbox feed in a Jakarta hotel room connected to a television in Burbank was probably not a vision shared by the early adopters of the Internet – and cloud computing will make similar un-thought of leaps in utility and capabilities over the next few years.

However, in the near term, do not be surprised if you see the entire membership of the New York Stock Exchange and NASDAQ operating from a shared cloud COIN. It will work.

Expanding the 4th Utility to Include Cloud Computing

A lot has been said over the past couple of months about broadband as the fourth utility, with the same status as roads, water, and electricity. For Americans, the next generation will have broadband network access as an entitlement. But is it enough?

Carr, in “The Big Switch,” discusses cloud computing being analogous to the power grid. The only difference is that for cloud computing to be really useful, it has to be connected. Connected to networks, homes, businesses, SaaS, and people. So the next logical extension of a fourth utility, beyond simply referring to broadband network access as a basic right for Americans (and others around the world – it just happens that as an American, for purposes of this article, I’ll refer to my own country’s situation), should include additional resources beyond simply delivering bits.

The “New” 4th Utility

So the next logical step is to marry cloud computing resources, including processing capacity, storage, and software as a service, to the broadband infrastructure. SaaS doesn’t mean you are owned by Google; it simply means you have access to those applications and resources needed to fulfill your personal or community objectives, such as having access to centralized e-Learning resources from the classroom, the home, or your favorite coffee shop. The network should simply be there, as should the applications needed to run your life in a wired world.

The data center and network industry will need to develop a joint vision that allows this environment to develop. Data centers house compute utility, networks deliver the bits to and from the compute utility and users. The data center should also be the interconnection point between networks, which at some point in the future, if following the idea of contributing to the 4th utility, will finally focus their construction and investments in delivering big pipes to users and applications.

Relieving the User from the Burden of Big Processing Power

As we continue to look at new home and laptop computers with quad-core processors, more than 8 GB of memory, and terabyte hard drives, it is hard to believe we actually need that much compute power resting on our knees to accomplish the day-to-day activities we perform online. Do we need a quad-core computer to check Gmail or edit a presentation in Microsoft Live Office?

In reality, very few users have applications that require the amounts of processing and storage we find in our personal computers. Yes, there are some applications, such as gaming and very high-end rendering, that burn processing calories, but for most of the world all we really need is a keyboard and a screen. This is what the 4th utility may bring us in the future: all we will really need is an interface device connected to the network, and the processing "magic" will take place in a cloud computing center, with the work done by a SaaS application.

The interface device can be a desktop terminal, a smartphone (such as an Android, iPhone, or other connected handheld device), a laptop, or anything else that can display and input data.

We won’t really care where the actual storage or processing of our application occurs, as long as the application’s latency is near zero.
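A minimal sketch of that thin-client model, assuming a hypothetical SaaS endpoint (the URL below is invented for illustration): the device only ships the user's input to the cloud, displays the response, and measures the round-trip latency that determines whether the experience feels local.

```python
import time
import urllib.request

# Hypothetical SaaS endpoint; in practice this would be whatever cloud
# application the user is working in (webmail, documents, etc.).
ENDPOINT = "https://example-saas.example.com/documents/render"

def render_in_the_cloud(document_text: str) -> tuple[str, float]:
    """Send the user's input to the cloud and return (result, latency_in_seconds).

    The local device does no heavy processing; it is just a keyboard,
    a screen, and a network connection.
    """
    request = urllib.request.Request(
        ENDPOINT,
        data=document_text.encode("utf-8"),
        headers={"Content-Type": "text/plain"},
    )
    start = time.perf_counter()
    with urllib.request.urlopen(request, timeout=5) as response:
        body = response.read().decode("utf-8")
    return body, time.perf_counter() - start

# Example (would only run against a real endpoint):
# result, latency = render_in_the_cloud("quarterly report draft")
# print(f"round trip: {latency * 1000:.0f} ms")
```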

The “Network is the Computer” Edges Closer to Reality

Since John Gage coined those famous words while working at Sun Microsystems, we have been edging closer to that reality. Through the early days of grid computing, software as a service, and virtualization, added to the rapid development of the Internet over the past 20 years, technology has finally moved compute resources into the network.

If we are honest with ourselves, we will admit that for 95% of computer users, a server-based application meets nearly all of our daily office automation, social media, and entertainment needs. Twitter is not a computer-based application; it is a network-enabled, server-based application. Ditto for Facebook, MySpace, LinkedIn, and most other services.

Now the "Network is the Computer" has finally matured into a utility, and at least in the United States it will soon be an entitlement for every resident. It is also another step in the globalization of our communities, as in time no person, country, or point on the earth will be beyond the reach of our terminal or input device.

That is good.

Developing Countries in the Cloud

Developing countries may be in a great position to take advantage of virtualization and cloud computing. During a recent visit to Indonesia, it was clear the government is struggling both with building a national ICT (Information and Communications Technology) plan and with consolidating a confusing array of servers and small data centers, all under a dearth of policies governing the storage and protection of data.

When we consider the need for data protection, covering both physical and information security, the current decentralization of data, carried out without adequate modeling of either end-user performance or data management, must be addressed. Doing so is essential to giving the nation the tools it needs to implement eGovernment projects, and to fully understanding the implications ICT planning will have for the future economic and social growth of the country.

Considering an E-Government Option Using Cloud Computing

If, as in the case of Indonesia, every governmental organization, from the Ministry of Education to the Ministry of Agriculture to individual licensing and tax administration offices, is running on servers that may in fact be plugged into ordinary wall outlets under a desk, you can see we have both a challenge and a great opportunity: to create a powerful new ICT infrastructure that leads the country into a new information-based generation.

Let's consider education as one example. Today, in many developing countries, there is very little budget available for developing an ICT curriculum. Classrooms consolidate several different classes (year groups), and even textbooks are limited. However, in many, if not most, developing countries, more than 95% of the population is covered by mobile and cellular phone networks.

This means that while there may be limited access to textbooks, with a bit of creativity we can bring technology to even the most remote locations via wireless access. This was very apparent during a recent conference (Digital Africa), where nearly every country present, including Uganda, Rwanda, Mali, and Chad, indicated aggressive deployment of wireless infrastructure. Here are a few simple ideas on the access side:

  1. Take advantage of low-cost solar panels to provide electricity and battery backup during daylight hours
  2. Take advantage of bulk discounts, as well as other international donor programs, to acquire low-cost netbooks or "dumb terminals" for delivery to remote classrooms
  3. Install wireless access points or receivers near the ubiquitous mobile antennas, and where necessary subsidize the mobile carriers to promote installation of data capacity within the mobile networks
  4. Take advantage of e-Learning programs that provide computer-based training and lessons
  5. Centralize the curriculum and student management programs in a cloud-based software as a service (SaaS) model, running on a central or geographically distributed cloud architecture (a rough sketch of the classroom side follows this list)
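Here is a rough sketch of what item 5 could look like from the classroom side, assuming a hypothetical national eLearning service (the elearning.example.go.id host and API paths are invented for illustration): the low-cost netbook or terminal only requests the day's lesson and reports attendance, while curriculum and student records live in the central cloud.

```python
import json
import urllib.request

# Hypothetical national eLearning SaaS endpoint (illustration only).
BASE_URL = "https://elearning.example.go.id/api/v1"

def fetch_lesson(school_id: str, grade: int, subject: str) -> dict:
    """Pull today's lesson plan from the central cloud service.

    The classroom device holds no curriculum locally; everything is
    served from the centralized (or geographically distributed) cloud.
    """
    url = f"{BASE_URL}/lessons?school={school_id}&grade={grade}&subject={subject}"
    with urllib.request.urlopen(url, timeout=10) as response:
        return json.load(response)

def report_attendance(school_id: str, student_ids: list[str]) -> None:
    """Send the morning roll call back to the central student-management system."""
    payload = json.dumps({"school": school_id, "students": student_ids}).encode("utf-8")
    request = urllib.request.Request(
        f"{BASE_URL}/attendance",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request, timeout=10)

# Example (would only run against a real deployment):
# lesson = fetch_lesson("papua-0042", grade=3, subject="mathematics")
```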

Now, we can further consider building out two or three data centers in the country, allowing for both load balancing and geographic data backup: cloud storage, cloud processing, and a high-capacity fiber optic backbone interconnecting the facilities. Again, this is not out of the question, as nearly all countries have, or are developing, a fiber backbone that interconnects major metropolitan areas.
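As a simple illustration of that two-or-three data center model (the site names and hostnames below are hypothetical), a client or edge service could route each request to the nearest reachable facility and fall back to a sister site when it is not:

```python
import socket

# Hypothetical national data centers, listed in order of preference
# for a user in western Indonesia; another region would reorder them.
DATA_CENTERS = [
    ("jakarta-dc", "dc1.example.go.id", 443),
    ("surabaya-dc", "dc2.example.go.id", 443),
    ("makassar-dc", "dc3.example.go.id", 443),
]

def pick_data_center(timeout: float = 2.0) -> str:
    """Return the first reachable data center, giving simple geographic failover.

    Real deployments would use DNS-based global load balancing or anycast,
    but the principle is the same: if one site is down, traffic shifts to
    a sister facility holding a mirrored copy of the data.
    """
    for name, host, port in DATA_CENTERS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return name
        except OSError:
            continue
    raise RuntimeError("no data center reachable")
```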

So, starting with our eLearning SaaS model, let’s add a couple more simple applications.

If we can deliver terminals and electricity to small schools anywhere in the country, why can't we extend the same model to farmers (eAgriculture), local governments, and individuals through the use of "Internet kiosks" or cafes, possibly located near village offices or police stations? We can, and in fact that is a model already used in countries such as Indonesia, where Internet cafes and kiosks called "WarNets" dot the countryside and urban areas. Many WarNets supplement their electricity with solar energy and provide Internet access via either fixed lines or wireless.

Cloud Computing Drives the Country

While some may reject the idea of complete standardization of both government and commercial applications at a national level, we can also argue that standardization and records management in the education system may in fact be a good thing. And when a student or adult in Papua (Indonesia) gains the necessary intellectual skills through local eLearning programs, and can spend the weekend watching videos or reading transcripts from the Stanford Education Program for Gifted Youth, the Center for Innovation, or an entrepreneur series, the value of that shared, centrally delivered platform becomes obvious.

Moreover, when a nation is able to take advantage of an economy of scale in which compute capacity becomes a utility, available to all government agencies at a fixed cost, and is able to develop a comprehensive library of SaaS applications that are either developed locally or made available through international agencies such as UNDP, the World Bank, USAID, and others, the potential gains multiply.

With effective use of SaaS, and integration of those SaaS applications on a standardized database and storage infrastructure, agencies and ministries running small, inefficient, and poorly managed infrastructure have the opportunity to consolidate into a centrally and professionally managed, well-supported national ICT infrastructure that allows not only the government to operate, but also supports the needs of individuals.

With a geographically distributed processing and data center model, disaster recovery also becomes easier: high-performance interconnecting backbones allow data mirroring and synchronization, reducing recovery time and recovery point objectives to near zero.
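A toy sketch of the mirroring idea behind those near-zero objectives, with the Site class standing in for a real record store at each facility (nothing here is a specific product): a write is acknowledged only after both sites hold it, so failing over loses no committed data.

```python
class Site:
    """Stand-in for one data center's copy of a government record store."""
    def __init__(self, name: str):
        self.name = name
        self.records = {}

    def apply(self, key: str, value: str) -> None:
        self.records[key] = value

def replicated_write(primary: Site, secondary: Site, key: str, value: str) -> None:
    """Synchronous mirroring: acknowledge only after both sites hold the write.

    Because the secondary is always current, failing over to it loses no
    committed data (recovery point objective near zero), and recovery time
    is mostly just the time needed to redirect traffic.
    """
    primary.apply(key, value)
    secondary.apply(key, value)   # shipped over the high-capacity interconnecting backbone

jakarta, surabaya = Site("jakarta"), Site("surabaya")
replicated_write(jakarta, surabaya, "tax-record-001", "filed")
assert surabaya.records == jakarta.records   # the mirror is always in sync
```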

The US CIO, Vivek Kundra, who manages the world’s largest IT organization (the United States Government), is a cloud believer. Kundra supports the idea of both national and local government standardization of applications and infrastructure, and in fact in a recent Government Technology News interview said he’s “moving forward with plans to create a storefront where federal government agencies could easily acquire standard, secure cloud computing applications.”

This brings a nation's government to the point where online email, office automation, graphics, storage, database, and hosting services are standard items, requested and provisioned in near real time on a secure, professionally managed infrastructure. It is a good vision of the future, one that will provide tremendous utility for both developed and developing countries.

I am thinking about a school in Papua, Indonesia. The third-year class in Jakarta is no longer in a different league from its counterpart in Papua, as students in both places use the same lessons available through the national eLearning system. It is a good future for Indonesia, and a very good example of how cloud computing will help bring developing countries into a competitive, global society and economy.
