It is Time to Get Serious about Architecting ICT

Just finished another ICT-related technical assistance visit with a developing country government. Even in mid-2014, I still spend a large amount of time teaching the basic principles of enterprise architecture and the need to add form and structure to ICT strategies.

Service-oriented architectures (SOA) have been around for quite a long time, with some references going back to the 1980s. ITIL, COBIT, TOGAF, and other ICT standards and frameworks have been with us for years as well, with training and certifications part of nearly every professional development program.

So why does the idea of architecting ICT infrastructure remain an abstraction to so many in government and even private industry? It cannot be a lack of training opportunities or publicly available reference materials. It cannot be a lack of technology, or a shortage of consultants readily willing to assist in deploying EA, SOA, or interoperability within any organization or industry cluster.

During the past two years we have run several Interoperability Readiness Assessments within governments. The assessment initially takes the form of a survey distributed to a sample of 100 or more participants, with positions ranging from administrative task-based workers to C-level and senior leaders within ministries and government agencies.

Questions range from basic ICT knowledge to data sharing, security, and decision support systems.
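
Purely as an illustration of how such survey results might be tabulated, the sketch below groups hypothetical responses by question category. The categories mirror those listed above; the respondents, scores, and 1-5 scale are invented for the example and do not reflect any actual assessment data.

```python
# Hypothetical tabulation of interoperability-readiness survey responses.
# Categories mirror the article; respondents and 1-5 scores are invented.
from collections import defaultdict
from statistics import mean

responses = [
    {"role": "administrative", "category": "data sharing", "score": 2},
    {"role": "ministry CIO", "category": "data sharing", "score": 3},
    {"role": "administrative", "category": "security", "score": 4},
    {"role": "senior leader", "category": "decision support", "score": 2},
]

by_category = defaultdict(list)
for r in responses:
    by_category[r["category"]].append(r["score"])

for category, scores in sorted(by_category.items()):
    print(f"{category:>16}: average readiness {mean(scores):.1f} (n={len(scores)})")
```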

While the problem of information silos is well documented and understood, it is still surprising to see how prevalent “siloed” attitudes remain in modern organizations.  Take the following question:

Question on Information Sharing

This question did not refer to sharing data outside of the government, but rather within it.  The responses indicate a deep lack of trust when interacting with other government agencies, which will of course prevent any chance of developing an SOA or facilitating information sharing among agencies.  The end result is lower integrity and value in national decision support capability.

The Impact of Technology and Standardization

Most governments are considering or implementing data center consolidation initiatives.  There are several good reasons for this, including:

  • Cost of real estate, power, staffing, maintenance, and support systems
  • Transition from CAPEX-based ICT infrastructure to OPEX-based
  • Potential for virtualization of server and storage resources
  • Standardized cloud computing resources

While all of those justifications for data center consolidation are valid, their value pales in comparison to the potential of more intelligent use of data across organizations, and even with external agencies.  On getting to this point, one senior government official stated:

“Government staff are not necessarily the most technically proficient.  This results in reliance on vendors for support, thought leadership, and in some cases contractual commitments.  Formal project management training and certification are typically not part of the capacity building of government employees.

Scientific approaches to project management, especially ones that lend themselves to institutionalization and adoption across different agencies will ensure a more time-bound and intelligent implementation of projects. Subsequently, overall knowledge and technical capabilities are low in government departments and agencies, and when employees do gain technical proficiency they will leave to join private industry.”

There is also an issue with a variety of international organizations going into developing countries or emerging economies and offering no-cost or low-cost, single-use ICT infrastructure, such as systems for health-related agencies, which is not compatible with any other government-owned or operated applications or data sets.

And of course the more this occurs, the more difficult it becomes for government organizations to enable interoperability or data sharing, and the idea of a common architecture becomes either impossible or extremely difficult to implement.

The Road to EA, SOAs, and Decision Support

There are several actions to take on the road to meeting our ICT objectives.

  1. Include EA, service delivery (ITIL), governance (COBIT), and SOA training in all university and professional ICT education programs.  It is not all about writing code or configuring switches; we need to ensure a holistic understanding of ICT value in all ICT education, producing better qualified graduates entering the workforce.
  2. Ensure government and private organizations develop or adopt standards or regulations which drive enterprise architecture, information exchange models, and SOAs as a basic requirement of ICT planning and operations.
  3. Ensure executive awareness and support, preferably through a formal position such as the Chief Information Officer (CIO).  Principles developed and published via the CIO must be adopted and governed by all organizations.

Nobody expects large organizations, in particular government organizations, to change their cultures of information independence overnight.  This is a long-term evolution as the world continues to better understand the value and extent of existing data sets, and begins creating new categories of data.  Big data, data analytics, and exploitation of both structured and unstructured data will empower those who are prepared, and leave those who are not far behind.

For a government, not having the ability to access, identify, share, analyze, and act on data created across agencies will inhibit effective decision support, with potential impact on disaster response, security, economic growth, and overall national quality of life.

If there is a call to action in this message, it is for governments to take a close look at how their national ICT policies, strategies, human capacity, and operations are meeting national objectives.  Prioritizing use of EA and supporting frameworks or standards will provide better guidance across government, and every step taken within the framework will add value to the overall ICT capability.

Pacific-Tier Communications LLC provides consulting to governments and commercial organizations on topics related to data center consolidation, enterprise architecture, risk management, and cloud computing.

Asian Carrier’s Conference 2013 Kicks Off in Cebu

The 2013 ACC kicked off on Tuesday morning with an acknowledgement by Philippine Long Distance Telephone Company (PLDT) CEO Napoleon L. Nazareno that “we’re going through a profound and painful transformation to digital technologies.” He continued to explain that, in addition to making the move to a digital corporate culture and architecture, traditional telcos will need to “master new skills, including new partnership skills,” in order to succeed.

That direction drives a line straight down the middle of attendees at the conference. Surprisingly, many companies attending and advertising their products still focus on “minutes termination,” and traditional voice-centric relationships with other carriers and “voice” wholesalers.

Matthew Howett, Regulation and Policy Practice Leader for Ovum Research, noted “while fixed and mobile minutes are continuing to grow, traditional voice revenue is on the decline.” He backed the statement up with figures on “Over the Top” (OTT) services, in which a provider delivers all types of communications – video, voice, and other connections – over an Internet Protocol network, most commonly the public Internet.

Howett informed the ACC’s plenary session attendees that Ovum Research believes up to US$52 billion will be lost in traditional voice revenues to OTT providers by 2016, and an additional US$32.6 billion to instant messaging providers in the same period.

The message to traditional communications carriers was simple – adapt or become irrelevant. National carriers may work with government regulators to adopt legal barriers preventing OTTs from operating in their countries; however, that is only a temporary measure to stem the flow of “technology-enabled” competition and retain revenues.

As noted by Nazareno, carriers must wake up to the reality that we are in a global technology refresh cycle, and construct business visions, expectations, and plans that will not only allow the company to survive, but also meet the needs of their users and national objectives.

Martin Geddes, owner of Martin Geddes Consulting, introduced the idea of “task substitution.” Task substitution occurs when an individual or organization is able to use a substitute technology or process to accomplish a task that was previously available from only a single source. One example is the traditional telephone call. In the past you would dial a number, and the telephone company would go through a series of connections, switches, and processes that would both connect the two end devices and provide accounting for the call.

The telephone user now has many alternatives to the traditional phone call – all task substitutions. You can use Skype, WebEx, GoToMeeting, instant messaging – any one of a multitude of utilities allowing an individual or group to participate in one-to-one or many-to-many communications. When a strong list of alternative methods to complete a task exists, the original method may become obsolete, or have to adapt rapidly to avoid being discarded by users.

A strong message, which made many attendees visibly uncomfortable.

Ivan Landen, Asia-Pacific Managing Director at Expereo, described the telecom revolution in terms all attendees could easily visualize: “Today around 80% of the world’s population has access to the electrical grid, while more than 85% of the population has access to wireless networks.”

He also provided an additional bit of information which did not surprise attendees, but made some of the telecom representatives a bit uneasy: in a survey conducted by Geddes, more than half of the business executives polled admitted their Internet access was better at home than in their offices. This information can be read several different ways, from poor IT planning within the company, to poor capacity management within the communication provider, to the reality that traffic on consumer networks is simply lower during the business day than during other periods.

However the main message was “there is a huge opportunity for communication companies to fix business communications.”

The conference continues until Friday. Many more sessions, many more perimeter discussions, and a lot of space for the telecom community to confront the reality that “we need to come to grips with the digital world.”

What Value Can I Expect from Cloud Computing Training?

Normally, when we think of technical training, images of rooms loaded with switches, routers, and servers might come to mind.  Cloud computing is different.  In reality, cloud computing is not a technology, but rather a framework employing a variety of technologies – most notably virtualization – to solve business problems or enable opportunities.

In our own practice, the majority of cloud training students come from non-technical careers and positions. Our training follows the CompTIA Cloud Essentials curriculum and is not a technical course, so the non-technical student trend should not come as a big surprise.

What does come as a surprise is how enthusiastically our students dig into the topic.  Whether business unit managers, accounting and finance, sales staff, or executives, all students come into class convinced they need to know about cloud computing as an essential part of their future career progression, or even at times to ensure their career survival.

Our training methodology is based on establishing an in-depth knowledge of the NIST cloud definitions and Cloud Reference Architecture.  Once students get past the perception that such documents are too complex – and accept that we will refer nearly all aspects of training back to both documents – we easily establish the core cloud computing knowledge base needed to explore the technical aspects and, more importantly, the practical aspects of how cloud computing is used in our daily lives, and will likely be used in the future.

This is not significantly different from when we trained business users on how to use and exploit the Internet in the 1990s.  Those of us in engineering or technical operations roles viewed this type of training with either amusement or contempt, at times mocking those who did not share our knowledge of internetworking and our ability to navigate the Internet universe.

We are in the same phase now, absorbing and developing tacit knowledge of on-demand compute and storage, service-oriented architectures, Software as a Service, and the move to a subscription-based application world.

Students who attend cloud computing training leave the class better able to engage in decision-making related to both personal and organizational information and communication technology, and less exposed to the spectrum of cloud washing – the marketing use of “cloud” and “as a Service” language overwhelming nearly all media, on subjects ranging from hamster food to SpaceX and hyperloops.

Even the hardest-core engineers who deign to join a non-technical, business-oriented cloud course walk away with a better view of how their tools support organizational agility (good jargon, no?), in addition to the potential financial impacts, reduced application development cycles, disaster recovery, business continuity, and all the other potential benefits to an organization adopting cloud computing.

Some even walk away from the course planning a breakup with some of their favorite physical servers.

The Bottom Line

No student has walked away from a cloud computing course knowing less about the role, impact, and potential of implementing cloud in nearly any organization.  While the first few hours of class include a lot of great debate on the value of cloud computing, by the end of the course most students agree they are better prepared to consider, envision, evaluate, and address the potential or shortfalls of cloud computing.

Cloud computing has, and will continue to have, influence on many aspects of our lives, and it is not going away anytime soon.  The more we learn, whether through self-study or classroom training, the better positioned we’ll be to make intelligent decisions regarding the use and value of cloud in our lives and organizations.

Connecting at the Westin Building Exchange in Seattle

International telecommunication carriers all share one thing in common – the need to connect with other carriers and networks.  We want to make a call to China, hold a video conference in Moldova, or send an email message for delivery to Australia within 5 seconds – all possible with the current state of global communications.  Magic?  Of course not.  While an abstraction to most, the reality is that physical telecommunications infrastructure extends to nearly every corner of the world, and communications carriers bring this global infrastructure together at a small number of strategically placed facilities informally called “carrier hotels.”

Pacific-Tier had the opportunity to visit the Westin Building Exchange (commonly known as the WBX), one of the world’s busiest carrier hotels, in early August.   Located in the heart of Seattle’s bustling business district, the WBX stands tall at 34 stories.  The building also acts as a crossroads of the Northwest US long distance terrestrial cable infrastructure, and is adjacent to trans-Pacific submarine cable landing points.

The world’s telecommunications community needs carrier hotels to interconnect their physical and value-added networks, and the WBX does a great job of facilitating those interconnections among its more than 150 carrier tenants.

“We understand the needs of our carrier and network tenants,” explained Mike Rushing, Business Development Manager at the Westin Building.  “In the Internet economy things happen at the speed of light.  Carriers at the WBX are under constant pressure to deliver services to their customers, and we simply want to make this part of the process (facilitating interconnections) as easy as possible for them.”

The WBX community is not limited to carriers.  The community has evolved to support Internet Service Providers, Content Delivery Networks (CDNs), cloud computing companies, academic and research networks, enterprise customers, public colocation and data center operators, the NorthWest GigaPOP, and even the Seattle Internet Exchange Point (SIX), one of the largest Internet exchanges in the world.

“Westin is a large community system,” continued Rushing.  “As new carriers establish a point of presence within the building, and begin connecting to others within the tenant and accessible community, then the value of the WBX community just continues to grow.”

The core of the WBX is the 19th floor meet-me-room (MMR).  The MMR is a large, neutral, interconnection point for networks and carriers representing both US and international companies.  For example, if China Telecom needs to connect a customer’s headquarters in Beijing to an office in Boise served by AT&T, the actual circuit must transfer at a physical demarcation point from China Telecom  to AT&T.  There is a good chance that physical connection will occur at the WBX.

According to Kyle Peters, General Manager of the Westin Building, “we are supporting a wide range of international and US communications providers and carriers.  We fully understand the role our facility plays in supporting not only our customer’s business requirements, but also the role we play in supporting global communications infrastructure.”

You would be correct in assuming the WBX plays an important role in that critical US and global communications infrastructure.  Thus you would further expect the WBX to be constructed and operated in a manner providing a high level of confidence to the community their installed systems will not fail.

Lance Forgey, Director of Operations at the WBX, manages not only the MMR, but also the massive mechanical (air conditioning) and electrical distribution systems within the building.  A former submarine engineer, Forgey runs the Westin Building much like he operated critical systems within Navy ships.  Assisted by an experienced team of former US Navy engineers and US Marines, the facility presents an image of security, order, cleanliness, and operational attention to detail.

“Our operations and facility staff bring the discipline of many years in the military, adding the innovation needed to keep up with our customers’ industries,” said Forgey.  “Once you have developed a culture of no compromise on quality, then it is easy to keep things running.”

That is very apparent when you walk through the site – everything is in its place, it is remarkably clean, and it is very obvious the entire site is the product of a well-prepared plan.

One area which stands out at the WBX is the cooling and electrical distribution infrastructure.  Using space within adjacent parking structures and other areas around the site, most heavy equipment is located outside of the building itself, providing an additional layer of physical security and allowing the WBX to recover as much space within the building as possible for customer use.

“Power is not an issue for us,” noted Forgey.  “It is a limiting factor for much of our industry, however at the Westin Building we have plenty, and can add additional power anytime the need arises.”

That is another attraction for the WBX versus some of the other carrier hotels on the West Coast of the US.  Power in Washington State averages around $0.04/kWh, while power in California may be nearly three times as expensive.
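
To put that price difference in perspective, here is a minimal back-of-the-envelope sketch. The $0.04/kWh Washington rate comes from the paragraph above; the 1 MW IT load, the PUE of 1.5, and the assumption that California power costs three times as much are illustrative assumptions, not WBX figures.

```python
# Illustrative only: annual power cost for a hypothetical colocation deployment.
# The $0.04/kWh Washington rate comes from the article; the 1 MW IT load,
# PUE of 1.5, and the 3x California rate are assumptions for the sketch.

HOURS_PER_YEAR = 24 * 365

def annual_power_cost(it_load_kw: float, pue: float, rate_per_kwh: float) -> float:
    """Total facility energy cost per year for a given IT load and PUE."""
    facility_kw = it_load_kw * pue          # IT load plus cooling/distribution overhead
    kwh_per_year = facility_kw * HOURS_PER_YEAR
    return kwh_per_year * rate_per_kwh

seattle = annual_power_cost(it_load_kw=1000, pue=1.5, rate_per_kwh=0.04)
california = annual_power_cost(it_load_kw=1000, pue=1.5, rate_per_kwh=0.12)

print(f"Washington State: ${seattle:,.0f}/year")     # ~$525,600
print(f"California:       ${california:,.0f}/year")  # ~$1,576,800
print(f"Difference:       ${california - seattle:,.0f}/year")
```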

“In addition to having all the interconnection benefits similar operations have on the West Coast, the WBX can also significantly lower operating costs for tenants,” added Rushing.  As power is a major component of data center operating costs, a significant reduction in the price of power makes a real difference to tenants.

The final area carrier hotels need to address is the ever changing nature of communications, including interconnections between members of the WBX community.  Nothing is static, and the WBX team is constantly communicating with tenants, evaluating changes in supporting technologies, and looking for ways to ensure they have the tools available to meet their rapidly changing environments.

Cloud computing, software-defined networking, carrier Ethernet – all topics which require frequent communication with tenants to gain insight into their visions, concerns, and plans.  The WBX staff showed great interest in cooperating with their tenants to ensure the WBX will not impede development or implementation of new technologies, and to stay ahead of customer deployments.

“If a customer comes to us and tells us they need a new support infrastructure or framework with very little lead time, then we may not be able to respond quickly enough to meet their requirements” concluded Rushing.  “Much better to keep an open dialog with customers and become part of their team.”

Pacific-Tier has visited and evaluated dozens of data centers during the past four years.  Some have been very good, some have been very bad.  Some have gone over the edge in data center deployments, chasing the “grail” of a Tier IV data center certification, while some have been little more than a server closet.

The Westin Building / WBX is unique in the industry.  Owned by both Clise Properties of Seattle and Digital Realty Trust, the Westin Building brings the best of both the real estate world and data centers into a single operation.  The quality of mechanical and electrical infrastructure, the people maintaining the infrastructure, and the vision of the company give a visitor an impression that not only is the WBX a world-class facility, but also that all staff and management know their business, enjoy the business, and put their customers on top as their highest priority.

As Clise Properties owns much of the surrounding land, the WBX has plenty of opportunity to grow as the business expands and changes.  “We know cloud computing companies will need to locate close to the interconnection points, so we better be prepared to deliver additional high-density infrastructure as their needs arise” said Peters.  And in fact Clise has already started planning for their second colocation building.  This building, like its predecessor, will be fully interconnected with the Westin Building, including virtualizing the MMR distribution frames in each building into a single cross interconnection environment.

The WBX offers the global telecom industry an alternative to carrier hotels in Los Angeles and San Francisco. One shortfall in the global telecom industry is the “single threaded” links many carriers have with others in the global community.  California hosts the majority of North America–Asia carrier interconnections today, but it is also one of the world’s higher-risk locations for critical infrastructure; it is more a matter of “when” than “if” a catastrophic event such as an earthquake will seriously disrupt international communications passing through one of the region’s MMRs.

The telecom industry needs to have the option of alternate paths of communications and interconnection points.  While the WBX stands tall on its own as a carrier hotel and interconnection site, it is also the best alternative and diverse landing point for trans-Pacific submarine cable capacity – and subsequent interconnections.

The WBX offers a wide range of customer services, including:

  • Engineering support
  • 24×7 remote hands
  • Fast turnaround for interconnections
  • Colocation
  • Power circuit monitoring and management
  • Private suites and lease space for larger companies
  • 24×7 security monitoring and access control

Check out the Westin Building and WBX the next time you are in Seattle, or if you want to learn more about the telecom community revolving and evolving in the Seattle area.  Contact Mike Rushing at mrushing@westinbldg.com for more information.

 

Big Data Skills – Opportunities and Threats

Big data.  There are few conversations in the IT community which do not start, address, or end on the topic.  Some conversations are visionary in nature, some critical, and many considering the challenges we’ll need to overcome in the process of understanding how to deal with big data.

Definitions of big data generally parallel the “3 Vs” – velocity, volume, and variety.  We may call it a byproduct of massive data growth, cheap technology, and our ability to save everything.  We may point to the enormous volumes of data generated by social media, smart grids, and other online transactions.

However we also need to compress those ideas into a form we can understand and use as a basis for our discussions.  For simplicity, we’ll use Gartner’s definition that “big data are high volume, high velocity, and/or high variety information assets that require new forms of processing to enable enhanced decision-making, insight discovery and process optimization.”

Enhanced decision-making?  Insight discovery?  Process optimization?  Outstanding opportunities to enhance our decision support process.  Just corral all the data available to us, and voila, we’re competitive on a global scale.

One minor snag – we’ll need to recruit technical and business staff with the skills and experience needed to effectively exploit the potential of big data, and to establish its value to business, government, and quality of life.

A widely quoted study by the McKinsey Global Institute (MGI) presents data supporting the idea that big data will ultimately become a key factor in competition and competitiveness across all public and private sectors.  The study also states that the United States alone will face a shortfall of nearly 190,000 professionals with deep analytical, coding, and mathematical skills.

The Australian Workspend Institute acknowledges there is a shortage of talent, and that the shortage is getting worse. It notes that the Australian mining, oil, and gas industries have identified a shortfall of 100,000 skilled engineers, and that the “war for talent” is further driving up the cost of labor for staff with deep analytical skills across the board.

Boston-based NewVantage Partners published the results of an executive survey of leading Fortune 1000 companies, noting that only 2% of the companies surveyed felt they had no challenge finding the talent and skilled resources needed to understand and exploit big data.

Is it really that important to be on top of big data?

A 29 March 2012 press release from the Executive Office of the President (US) unveiling the country’s “Big Data Research and Development  Initiative” stated “by improving our ability to extract knowledge and insights from large and complex collections of digital data, the initiative promises to solve some of the nation’s most pressing problems.”

Perhaps crime?  BBC Horizon aired a special on big data in early 2013, highlighting the Los Angeles Police Department’s (LAPD) use of big data and analytics to predict crime.  Drawing on more than 30 million crime records, plus demographic, geospatial, and other historical data from LA and around the world, the LAPD uses analytics to predict when and where a crime will likely occur, down to an area of 300 square feet – with uncanny, almost creepy results.

Health information?  Climate change?  Mining? Earthquakes? Planets which may support human life?

Given the amount of data available, data being created, and the ability of intelligent data scientists to create models of the data value, most believe those who can harvest big data will gain significant advantage.

So what do we do, give up on big data and focus on other activities?

In a recent Forbes article on big data, author Ben Woo of Neuralytix wrote, “when it comes to big data, we believe if you’re not doing it, your competitors are!”

The United States still holds a significant advantage, leading the world in individuals with skill and experience in deep analytics.  According to the MGI the US, China, India, and Russia lead the world in gross numbers of capable data scientists, while Poland and Romania are graduating the highest numbers of students skilled in deep analytics.

As the world continues to embrace the science of data, skilled individuals will remain in demand.  CIO Magazine notes that “CIOs are competing for workers with strong math skills, proficiency working with massive databases and emerging database technologies, in addition to workers with expertise in search, data integration, and business knowledge.”

The skills CIOs believe are the most difficult to find and recruit include:

  • Advanced analytics and predictive analysis skills
  • Complex event processing skills
  • Rule management skills
  • Business intelligence (BI) skills
  • Data integration skills

David Foote, Foote Partners’ Chief Research Officer, writes that “many colleges and universities haven’t yet risen to the challenge of teaching the skills that are potentially needed for analytics jobs.”

Industry may be rising to the need, partnering with universities to form the National Consortium for Data Science (NCDS).  According to a paper from the University of North Carolina Kenan-Flagler Business School (UNC), the NCDS agenda is to better align university programs with the needs of the private sector and government.

UNC further advises four steps human resources and talent management organizations can take to bridge the skills and talent shortfall in big data analytics:

  1. Educate themselves about big data.  If the organization does not understand big data at an operational level, it is unlikely it will be able to recruit skilled data scientists who do understand the concept.
  2. Educate managers and senior leaders about big data.
  3. Develop creative strategies to recruit and retain big data talent.
  4. Offer plans and solutions on how to build the talent in-house.

NOTE:  As a particularly positive note for Southern California readers, California State University – Long Beach (Long Beach State) was identified in UNC’s report as one of the few schools in the world with a strong, focused deep data analytics program.

While there is no immediate solution to the global shortfall in big data talent, we are finally awakening to the need.  The academic community will need to design and implement curricula and programs that acknowledge the need for graduates capable of dealing with big data.  This means going back to the basics of mathematics, statistics, analytics, and the other disciplines that produce capable data scientists.

The Internet, computers, and the ability to collect data have changed our world, and created many difficult challenges in our ability to understand the opportunities.  Time to step up to the challenge.

The Value of Cloud Computing Certifications

A good indication that any new technology or business model is starting to mature is the number of certifications popping up related to that product, framework, or service.  Cloud computing is certainly no exception, with vendors such as Microsoft, Google, VMware, and IBM offering certification training for their own products, and organizations such as CompTIA and Architura competing for industry-neutral certifications.

Is this all hype, or is it an essential part of the emerging cloud computing ecosystem?  Can we remember the days when entry level Cisco, Microsoft, or other vendor certifications were almost mocked by industry elitists?

Much like the early Internet days of eEverything, cloud computing is at the point where most have heard the term, few understand the concepts, and marketing folk are exploiting every possible combination of the words to place their products in a favorable, forward leaning light.

So, what if executive management takes a basic course in cloud computing principles, or sales and customer service people take a Cloud 101 course?  Is that bad?

Of course not.  Cloud computing has the potential of being transformational to businesses, governments, organizations, and even individuals.  Business leaders need to understand the potential and impact of what a service-oriented cloud computing infrastructure might mean to their organization, the game-changing potential of integration and interoperability, the freedom of mobility, and the practical execution of basic cloud computing characteristics within their ICT environment.

A certification is not only about passing the test and collecting the certificate.  As an instructor for the CompTIA course, I manage classes of 20 or more students ranging from engineers, to network operations center staff, to customer service and sales, to mid-level executives.  We have yet to encounter an individual who claims to have learned nothing from attending the course, and most leave with a very different view of cloud computing than the one they held before the class.

As with most technology driven topics, cloud computing does break into different branches – including technical, operations, and business utility.

The underlying technologies of cloud computing are probably the easiest part of the challenge, as ultimately skills will develop based on time, experience, and operation of cloud-related technologies.

The more difficult challenge is understanding what the impact of cloud computing may mean to an organization, both internally and on a global scale.  No business-related discussion of cloud computing is complete without consideration of service-oriented architectures, enterprise architectures, interoperability, big data, disaster management, and continuity of operations.

Business decisions on data center consolidation, ICT outsourcing, and other aspects of the current technology refresh or financial planning will be more effective and structured when accompanied by a basic, high-level understanding of the technologies underlying cloud computing.  As an approach to business transformation, complementary capabilities in enterprise architecture, service-oriented architectures, and IT service management will certainly help senior decision makers understand the relationship between cloud computing and their organizational planning.

Reading the news, clipping stories, and self-study may help decision makers understand the basic components of cloud computing and other supporting technologies, but taking an introductory cloud computing course, whether vendor-specific or neutral, will give enough background knowledge to at least engage in the conversation. Given the hype surrounding cloud computing, and the potential long-term consequences of making an uninformed decision, the investment in cloud computing training must be considered valuable at all levels of the organization, from technical staff to senior management.

ICT Modernization Planning

The current technology refresh cycle presents many opportunities and challenges to both organizations and governments.  The potential of service-oriented architectures, interoperability, collaboration, and continuity of operations is an attractive outcome of the technologies and business models available today.  The challenges are more related to business processes and human factors, both of which require organizational transformation to take best advantage of the collaborative environments enabled through use of cloud computing and access to broadband communications.

Planning an interoperable environment for governments and organizations can be facilitated through use of business tools such as cloud computing.  Cloud computing and its underlying technologies can create an operational environment supporting many of the strategic objectives being considered within government and private sector organizations.

Reaching target architectures and capabilities is not a single action; it requires a clear understanding of the current “as-is” baseline capabilities, the target requirements, the gaps or capabilities needed to reach the target, and a clear transition plan to bring the organization from the “as-is” baseline to the target goal.
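
As a minimal illustration of that gap analysis, the sketch below compares an assumed “as-is” capability list against a target list and reports the gap. The capability names and the simple set difference are illustrative assumptions, not a prescribed EA method.

```python
# Illustrative gap analysis: compare "as-is" baseline capabilities with a
# target architecture and list what is missing.  Capability names are
# hypothetical examples, not a formal EA taxonomy.

as_is = {
    "email",
    "departmental file servers",
    "standalone GIS",
    "paper-based records",
}

target = {
    "email",
    "shared document management (SaaS)",
    "GIS with published APIs",
    "enterprise service bus",
    "BCDR with off-site replication",
}

gap = target - as_is          # capabilities to build or acquire
retired = as_is - target      # capabilities to decommission

print("Capabilities to add:", sorted(gap))
print("Capabilities to retire:", sorted(retired))
```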

To most effectively reach that goal requires an understanding of the various contributing components within the transformational ecosystem.  In addition, planners must keep in mind the goal is not implementation of technologies, but rather consideration of technologies as needed to facilitate business and operations process visions and goals.

Interoperability and Enterprise Architecture

Information technology, particularly communications-enabled technology, has enhanced business processes, education, and the quality of life for millions around the world.  However, traditional ICT has created silos of information which are rarely integrated or interoperable with other data systems or sources.

As the science of enterprise architecture development and modeling, service-oriented architectures, and interoperability frameworks continue to force the issue of data integration and reuse, ICT developers are looking to reinforce open standards allowing publication of external interfaces and application programming interfaces.

Cloud computing, a rapidly maturing framework for virtualization and standardized data, application, and interface technologies, offers a wealth of tools to support development of both integrated and interoperable ICT resources within organizations, as well as among their trading, shared, or collaborative workflow communities.

The Institute for Enterprise Architecture Development defines enterprise architecture (EA) as a “complete expression of the enterprise; a master plan which acts as a collaboration force between aspects of business planning such as goals, visions, strategies and governance principles; aspects of business operations such as business terms, organization structures, processes and data; aspects of automation such as information systems and databases; and the enabling technological infrastructure of the business such as computers, operating systems and networks”

ICT, including utilities such as cloud computing, should focus on supporting the holistic objectives of organizations implementing an EA.  Non-interoperable, siloed data will generally have less value than reusable, shared data, and moving to reusable data will greatly increase systems reliability and data integrity.

Business Continuity and Disaster Recovery (BCDR)

Recent surveys of governments around the world indicate, in most cases, limited or no disaster management or continuity of operations planning.  The risk of losing critical national data resources to natural or man-made disasters is high, and the ability of most governments to maintain government and citizen services during a disaster is limited, measured by the amount of time required to restart services (recovery time objective, or RTO) and the point to which data can be restored (recovery point objective, or RPO).

In existing ICT environments, particularly those with organizational and data resource silos, RTOs and RPOs can stretch toward indefinite if neither a data backup plan nor the systems and service restoral capacity is in place.  This is particularly acute if the processing environment includes legacy mainframe applications which have no mirrored recovery capacity available upon failure or loss of service due to disaster.

Cloud computing can provide a standards-based environment that supports near-zero RTO/RPO requirements.  Within the current limitation that most cloud platforms are based on Intel-compatible architectures, nearly any existing application or data source can be migrated into a virtual resource pool.  Once within a cloud computing Infrastructure as a Service (IaaS) environment, setting up distributed processing or backup capacity is relatively uncomplicated, assuming adequate broadband access to the end user and between processing facilities.
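
As a simple illustration of the RTO/RPO vocabulary used above, the sketch below checks whether an assumed replication interval and restart time would satisfy hypothetical recovery targets. The two-hour RPO, four-hour RTO, and the replication and restart figures are all invented for the example.

```python
# Illustrative only: does a given backup/replication design meet its
# recovery targets?  All figures here are hypothetical.
from datetime import timedelta

rpo_target = timedelta(hours=2)    # max tolerable data loss
rto_target = timedelta(hours=4)    # max tolerable service outage

replication_interval = timedelta(minutes=30)   # how often data is copied off-site
worst_case_restart = timedelta(hours=3)        # time to bring services up at backup site

# Worst-case data loss equals the time since the last successful replication.
meets_rpo = replication_interval <= rpo_target
meets_rto = worst_case_restart <= rto_target

print(f"RPO met: {meets_rpo} (worst-case loss {replication_interval})")
print(f"RTO met: {meets_rto} (worst-case restart {worst_case_restart})")
```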

Cloud-enabled BCDR also opens opportunities for developing public-private partnerships (PPPs), or for outsourcing into public or commercially operated cloud compute, storage, and communications infrastructure.  Again, the main limitation is the requirement for portability between systems.

Transformation Readiness

ICT modernization will drive change within all organizations.  Transformational readiness is not a matter of technology, but a combination of factors including rapidly changing business models, the need for many-to-many real-time communications, flattening of organizational structures, and the continued entry of technology and communications savvy employees into the workforce.

The potential of outsourcing utility compute, storage, application, and communications will eliminate the need for much physical infrastructure, such as redundant or obsolete data centers and server closets.  Roles will change based on the expected shift from physical data centers and ICT support hardware to virtual models based on subscriptions and catalogs of reusable application and process artifacts.

A business model for accomplishing ICT modernization includes cloud computing, which relies on technologies such as server and storage virtualization, and adds operational characteristics such as on-demand resource provisioning to reduce the time needed to procure ICT resources when responding to emerging operational or other business opportunities.
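
To make “on-demand provisioning” concrete, here is a minimal sketch of what a self-service request might look like against a provider API. The `CloudProvider` class, instance sizes, and workflow are entirely hypothetical stand-ins; real IaaS platforms expose their own SDKs and endpoints.

```python
# Hypothetical sketch of on-demand provisioning.  CloudProvider is a stand-in
# for a real IaaS SDK; the point is that capacity is requested in minutes via
# an API call rather than procured as physical hardware over weeks or months.
import uuid


class CloudProvider:
    """Toy provider that 'allocates' virtual machines from a shared pool."""

    def provision_vm(self, cpus: int, memory_gb: int, storage_gb: int) -> dict:
        return {
            "id": str(uuid.uuid4()),
            "cpus": cpus,
            "memory_gb": memory_gb,
            "storage_gb": storage_gb,
            "state": "running",
        }

    def deprovision_vm(self, vm: dict) -> None:
        vm["state"] = "terminated"   # capacity returns to the shared pool


provider = CloudProvider()

# A business unit requests capacity for a short-lived project...
vm = provider.provision_vm(cpus=4, memory_gb=16, storage_gb=200)
print("Provisioned:", vm["id"], vm["state"])

# ...and releases it when the project ends, so the organization pays only
# for what was actually used (OPEX rather than CAPEX).
provider.deprovision_vm(vm)
print("Released:", vm["id"], vm["state"])
```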

IT management and service operations move from a workstation environment to a user interface driven by SaaS.  The skills needed to drive ICT within the organization will need to change, becoming closer to the business, while reducing the need to manage complex individual workstations.

IT organizations will need to change, as organizations may elect to outsource most or all of their underlying physical data center resources to a cloud service provider, either in a public or private environment.  This could eliminate the need for some positions, while driving new staffing requirements in skills related to cloud resource provisioning, management, and development.

Business unit managers may be able to take advantage of other aspects of cloud computing, including access to on-demand compute, storage, and applications development resources.  This may increase their ability to quickly respond to rapidly changing market conditions and other emerging opportunities.   Business unit managers, product developers, and sales teams will need to become familiar with their new ICT support tools.  All positions from project managers to sales support will need to quickly acquire skills necessary to take advantage of these new tools.

The Role of Cloud Computing

Cloud computing is a business representation of a large number of underlying technologies.  Spanning virtualization, development environments, and hosted applications, cloud computing provides a framework of standardized service models, deployment models, and service delivery characteristics.

The US National Institute of Standards and Technology (NIST) provides a definition of cloud computing accepted throughout the ICT industry.

“Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction.“
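
The NIST definition is commonly summarized as five essential characteristics, three service models, and four deployment models; the snippet below simply lists them as a reference structure, drawn from NIST SP 800-145.

```python
# The NIST cloud model (SP 800-145), captured as a simple reference structure.
NIST_CLOUD_MODEL = {
    "essential_characteristics": [
        "on-demand self-service",
        "broad network access",
        "resource pooling",
        "rapid elasticity",
        "measured service",
    ],
    "service_models": ["SaaS", "PaaS", "IaaS"],
    "deployment_models": ["private", "community", "public", "hybrid"],
}

for group, items in NIST_CLOUD_MODEL.items():
    print(f"{group}: {', '.join(items)}")
```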

While organizations face decisions and challenges related to developing enterprise architectures and interoperability, cloud computing continues to develop rapidly as an environment with a rich set of compute, communication, development, standardization, and collaboration tools needed to meet organizational objectives.

Data security, including privacy, is different within a cloud computing environment, as the potential for data sharing expands among internal and potentially external agencies.  Security concerns grow when infrastructure multi-tenancy, network access to hosted applications (Software as a Service / SaaS), and governance of authentication and authorization raise questions about end-user trust in the cloud provider.

A move to cloud computing is often associated with data center consolidation initiatives within both governments and large organizations.  Cloud delivery models, including Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) support the development of virtual data centers.

While it is clear that the long-term target architecture for most organizations will be an environment with a single, integrated data system, in the short term it may be more important to decommission high-risk server closets and unmanaged servers, consolidating them into a centralized, well-managed data center environment offering on-demand access to compute, storage, and network resources – as well as BCDR options.

Even at the most basic level of considering IaaS and PaaS as a replacement environment to physical infrastructure, the benefits to the organization may become quickly apparent.  If the organization establishes a “cloud first” policy to force consolidation of inefficient or high risk ICT resources, and that environment further aligns the organization through the use of standardized IT components, the ultimate goal of reaching interoperability or some level of data integration will become much easier, and in fact a natural evolution.

Nearly all major ICT-related hardware and software companies are re-engineering their product development to either drive cloud computing or be cloud-aware.  Microsoft has released its Office 365 suite of online and hosted environments, as has Google with both PaaS and SaaS tools such as Google App Engine and Google Docs.

The benefits for organizations considering a move to hosted environments, such as Office 365, are based on access to a rich set of applications and resources available on demand, using a subscription model rather than a licensing model, and offering a high level of standardization to developers and applications.

Users comfortable with standard office automation and productivity tools will find the same features in a SaaS environment, while being relieved of individual software license costs, application maintenance, and the potential loss of resources due to equipment failure or theft.  Hosted applications also allow a persistent-state, collaborative, real-time environment for multiple users requiring access to the same documents or projects.  Document management and single-source data, available for reuse by applications, other users, reporting, and performance management, become routine, reducing the potential for and threat of data corruption.
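
The subscription-versus-license point can be illustrated with a simple worked comparison. All prices, user counts, and the maintenance rate below are made-up assumptions for the sketch, not vendor pricing, and the outcome will differ with real figures.

```python
# Illustrative license-vs-subscription comparison.  All prices and counts are
# made-up assumptions for the sketch, not vendor pricing.

users = 200
years = 3

# Perpetual-license model (CAPEX-like): upfront license plus annual maintenance.
license_per_user = 350          # one-time, assumed
maintenance_rate = 0.20         # 20% of license price per year, assumed
license_total = users * license_per_user * (1 + maintenance_rate * years)

# Subscription model (OPEX): flat monthly fee per user.
subscription_per_user_month = 12   # assumed
subscription_total = users * subscription_per_user_month * 12 * years

print(f"Perpetual license, {years} years: ${license_total:,.0f}")
print(f"Subscription,      {years} years: ${subscription_total:,.0f}")
```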

The shortfall, particularly for governments, is that using a large commercial cloud infrastructure and service provider such as Microsoft may require physically storing data in locations outside of the home country, as well as placing data into a multi-tenant environment which may not meet an organization's security requirements.

Cloud computing offers an additional major feature at the SaaS level that will benefit nearly all organizations transitioning to a mobile workforce.  SaaS is, by definition, platform independent.  Users access SaaS applications and underlying data from any device offering a network connection and a browser capable of reaching an Internet-connected address.  The actual intelligence of the application resides at the server or virtual server, and the user device is simply a thin terminal displaying a portal, an access point, or the results of a query or command executed from the user's screen.

Cloud computing continues to develop as a framework and toolset for meeting business objectives.  It is well suited to responding to rapidly changing business and organizational needs: on-demand access to infrastructure resources, rapid elasticity (the ability to provision and de-provision resources as processing and storage demand changes), and the ability to measure cloud resource use for internal and external accounting mark a major change in how an organization budgets ICT.

As cloud computing matures, each organization entering a technology refresh cycle must ask the question "are we in the technology business, or should we concentrate our efforts and budget on directly supporting our business objectives?"  If the answer is the latter, then the organization should evaluate outsourcing its ICT infrastructure to an internal or commercial cloud service provider.

It should be noted that today most IaaS platforms will not support migration of mainframe applications, or other applications written for RISC processors.  Those applications require redevelopment to operate within an Intel-compatible processing environment.

Broadband Factor

Cloud computing components are currently implemented over an Internet Protocol network.  Users accessing SaaS applications will need network access to connect with applications and data.  Depending on the amount of graphics information transmitted from the host to an individual user access terminal, poor bandwidth or lack of broadband could result in an unsatisfactory experience.

In addition, BCDR requires the transfer of potentially large amounts of data between primary and backup locations. Depending on the data protection plan – mirroring, partial backups, full backups, or live load balancing – data transfer between sites could be restricted if sufficient bandwidth is not available.
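
A quick worked example shows why this matters: the sketch below estimates how long a full backup would take across a given link. The 2 TB backup size, link speeds, and 80% utilization figure are assumptions chosen for illustration.

```python
# Illustrative transfer-time estimate for a site-to-site backup.
# The 2 TB backup size, link speeds, and utilization are assumptions.

def transfer_hours(data_gb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Hours to move data_gb over a link of link_mbps at a given utilization."""
    data_megabits = data_gb * 8 * 1000          # GB -> megabits (decimal units)
    seconds = data_megabits / (link_mbps * efficiency)
    return seconds / 3600

backup_gb = 2000   # 2 TB nightly full backup (assumed)
for mbps in (100, 1000, 10000):
    print(f"{mbps:>6} Mbps link: {transfer_hours(backup_gb, mbps):5.1f} hours")
```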

Cloud computing is dependent on broadband as a means of connecting users to resources, and data transfer between sites.  Any organization considering implementing cloud computing outside of an organization local area network will need to fully understand what shortfalls or limitations may result in the cloud implementation not meeting objectives.

The Service-Oriented Cloud Computing Infrastructure (SOCCI)

Governments and other organizations are entering a technology refresh cycle as existing ICT hardware and software infrastructure reaches the end of its life.  In addition, as the world continues to break down national and technical borders, organizations will be driven to reconsider the creation, use, and management of data supporting both mission-critical business processes and decision support systems.

Given the clear direction industry is taking toward cloud computing services, and the awareness that the siloed data structures existing within many organizations would serve them better within a service-oriented framework, it makes sense to consider an integrated approach.

A SOCCI considers both, adding reference models and frameworks – including enterprise architecture models such as TOGAF – to ultimately provide a broad, mature framework supporting business and IT managers in their technology and business refresh planning.

SOCCIs promote the use of architectural building blocks, publication of external interfaces for each application or data source developed, single-source data, reuse of data and standardized application building blocks, and development and use of enterprise service buses to promote further integration and interoperability of data.
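
As a minimal illustration of "publishing an external interface," the sketch below exposes a single data source behind a small HTTP/JSON endpoint so other agencies or applications could reuse it instead of copying the data. The /population endpoint and sample records are hypothetical; a production service bus or API gateway would add authentication, schemas, and governance.

```python
# Minimal illustration of publishing an external interface over a data source.
# The /population endpoint and sample records are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Single-source data set other applications can reuse instead of copying.
POPULATION_BY_DISTRICT = {"north": 120_000, "central": 340_000, "south": 95_000}


class DataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/population":
            body = json.dumps(POPULATION_BY_DISTRICT).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404, "unknown resource")


if __name__ == "__main__":
    # Any authorized consumer can now GET http://localhost:8080/population
    HTTPServer(("localhost", 8080), DataHandler).serve_forever()
```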

A SOCCI will look at elements of cloud computing, such as virtualized and on-demand compute/storage resources, and access to broadband communications – including security, encryption, switching, routing, and access – as a utility.  The utility is always available to the organization for use and exploitation.  Higher-level cloud components, including PaaS and SaaS, add value and provide higher-level entry points for developing the ICT tools needed to meet the overall enterprise architecture and service orientation the organization requires.

According to the Open Group a SOCCI framework provides the foundation for connecting a service-oriented infrastructure with the utility of cloud computing.  As enterprise architecture and interoperability frameworks continue to gain in value and importance to organizations, this framework will provide additional leverage to make best use of available ICT tools.

The Bottom Line on ICT Modernization

The Internet has reached nearly every point in the world, providing a global community functioning within an always-available, real-time communications infrastructure.  University and primary school graduates are entering the workforce with social media, SaaS, collaboration, and location-transparent peer communities diffused in their tacit knowledge and experience.

This environment has greatly flattened the leverage that developed countries or large monopoly companies formerly enjoyed during the past several technology and market cycles.

An organization built on non-interoperable, non-standardized data, with no BCDR protection, risks losing its competitive edge in a world being created by technology- and data-aware challengers.

Given the urgency organizations face to address data security, continuity of operations, agility to respond to market conditions, and operational costs associated with traditional ICT infrastructure, many are looking to emerging technology frameworks such as cloud computing to provide a model for planning solutions to those challenges.

Cloud computing and enterprise architecture frameworks provide guidance and a set of tools to assist organizations in building the structure, and the infrastructure, needed to accomplish ICT modernization objectives.

Data Center Consolidation and Adopting Cloud Computing in 2013

Throughout 2012 large organizations and governments around the world continued to struggle with the idea of consolidating inefficient data centers, server closets, and individual “rogue” servers scattered around their enterprise or government agencies.  Issues dealt with the cost of operating data centers, disaster management of information technology resources, and of course human factors centered on control, power, or retention of jobs in a rapidly evolving IT industry.

Cloud computing and virtualization continue to have an impact on all consolidation discussions, not only from the standpoint of providing a much better model for managing physical assets, but also in the potential cloud offers to solve disaster recovery shortfalls, improve standardization, and encourage or enable development of service-oriented architectures.

Our involvement in projects at local, state, and national government levels, in both the United States and other countries, indicates a consistent need to address the following concerns:

  • Existing infrastructure, including both IT equipment and facilities, is reaching the end of its operational life
  • Collaboration requirements between internal and external users are expanding quickly, driving an architectural need for interoperability
  • Decision support systems require access to both raw data, and “big data/archival data”

We would like to see an effort within the IT community to move in the following directions:

  1. Real effort at decommissioning and eliminating inefficient data centers
  2. All data and applications should fit within an enterprise architecture framework – regardless of the size of the organization or data
  3. Aggressive development of standards supporting interoperability, portability, and reuse of objects and data

Regardless of the very public failures experienced by cloud service providers over the past year, the reality is cloud computing as an IT architecture and model is gaining traction, and is not likely to go away any time soon.  As with any emerging service or technology, cloud services will continue to develop and mature, reducing the impact and frequency of failures.

Future Data Centers

Why would an organization continue to buy individual high-powered workstations, individual software licenses, and device-bound storage when the same applications can be delivered to a simple display, or a wide variety of displays, through standardized, web-enabled cloud (SaaS) applications that store mission-critical data and images on a secure storage system at a secure site?  Why not facilitate the transition from CAPEX to OPEX, license to subscription, infrastructure to product and service development?

In reality, unless an organization is in the hardware or software development business, there is very little technical justification for building and managing a data center.  This includes secure facilities supporting military or other sensitive sites.

The cost of building and maintaining a data center, compared with outsourcing into a commercial colocation site or virtualizing data, applications, and network access requirements, has gained the attention of CFOs and CEOs, requiring IT managers to justify much more explicitly the cost of building internal infrastructure versus outsourcing.  This is quickly becoming a very difficult task.

Money spent on data center infrastructure is lost to the organization.  The costs of labor, energy, space, and maintenance are high; that is money which could be better applied to product and service development, customer service capacity, or other revenue-generating and customer-facing activities.
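
For readers who want to frame that argument in numbers, here is a minimal sketch of the build-versus-outsource comparison a CFO will ask for. Every figure is an illustrative assumption, not a benchmark; substitute your own CAPEX, OPEX, and subscription estimates.

```python
# Illustrative build-versus-outsource comparison. Every figure below is a
# placeholder assumption, not a benchmark; the structure of the question
# is the point, not the specific numbers.

def npv(cash_flows, discount_rate):
    """Net present value of annual cash flows, year 0 first."""
    return sum(cf / (1 + discount_rate) ** year for year, cf in enumerate(cash_flows))

YEARS = 5
DISCOUNT_RATE = 0.08

# Option A: build and run an internal data center (CAPEX up front plus annual OPEX).
build_capex = 4_000_000          # assumed facility, power, and cooling fit-out
build_annual_opex = 900_000      # assumed staff, energy, maintenance, refresh
build = [-(build_capex + build_annual_opex)] + [-build_annual_opex] * (YEARS - 1)

# Option B: colocation / cloud subscription (pure OPEX).
subscription_annual = 1_400_000  # assumed all-in annual subscription
outsource = [-subscription_annual] * YEARS

print(f"Build NPV over {YEARS} years:     {npv(build, DISCOUNT_RATE):,.0f}")
print(f"Outsource NPV over {YEARS} years: {npv(outsource, DISCOUNT_RATE):,.0f}")
```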

The Bandwidth Factor

The one major limitation the IT community will need to overcome as data center consolidation continues and cloud services become the norm is bandwidth.  Applications such as streaming video, unified communications, and data-intensive applications will need more bandwidth.  The telecom companies are making progress, having deployed 100 Gbps backbone capacity in many markets.  However, this capacity will need to continue growing quickly to meet the needs of organizations accessing data and applications stored or hosted within a virtual or cloud computing environment.
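
A quick back-of-the-envelope model shows why. The per-user rates and concurrency figures below are assumptions for illustration only, but even modest values consume a large share of a 100 Gbps backbone.

```python
# Back-of-the-envelope sizing of the access bandwidth a consolidated data
# center must serve. All per-user rates are rough assumptions; substitute
# measured figures for real planning.

PER_USER_MBPS = {
    "unified_communications": 1.5,   # voice, presence, screen sharing
    "hd_video_conferencing": 4.0,
    "saas_and_virtual_desktop": 2.0,
    "data_intensive_apps": 3.0,      # GIS, analytics, large file transfer
}

concurrent_users = 20_000            # e.g. consolidated ministries in one metro
concurrency_factor = 0.35            # share of users active at peak

peak_mbps = concurrent_users * concurrency_factor * sum(PER_USER_MBPS.values())
print(f"Peak demand: {peak_mbps / 1000:.1f} Gbps")
# With these assumptions, peak demand is roughly 73.5 Gbps - most of a
# single 100 Gbps backbone link before growth, redundancy, or burst headroom.
```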

Consider a national government’s IT requirements.  The government, like most, is based within a metro area.  The agencies and departments consolidate their individual data centers and server closets into a central facility, or a reduced number of facilities.  Government interoperability frameworks begin to make small steps toward cross-agency data sharing, and individual users need access to a variety of applications and data sources to fulfill their decision support requirements.

Take, for example, a GIS (geographic information system) with multiple demographic or other overlays.  Individual users will need to display data drawn from several sources, through GIS applications, rendering a large amount of complex data on individual screens.  Without broadband access both between the user and the application, and between the application and its data sources, the result will be a very poor user experience.
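
A rough sketch of what a single multi-overlay GIS screen costs in bandwidth makes the point; the layer counts and tile sizes below are invented for illustration.

```python
# Rough estimate of what one multi-overlay GIS screen costs in bandwidth.
# Layer counts, tile counts, and tile sizes are invented for illustration.

overlay_layers = 6       # base map plus demographic / infrastructure overlays
tiles_per_layer = 35     # tiles needed to fill one display viewport
avg_tile_kb = 45         # average compressed tile size

screen_mb = overlay_layers * tiles_per_layer * avg_tile_kb / 1024
for label, mbps in {"2 Mbps DSL": 2, "20 Mbps broadband": 20, "100 Mbps fibre": 100}.items():
    seconds = screen_mb * 8 / mbps
    print(f"{label:>17}: {seconds:5.1f} s to render one {screen_mb:.1f} MB screen")
```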

Another example is using the capabilities of video conferencing, desktop sharing, and interactive persistent-state application sharing.  Without adequate bandwidth this is simply not possible.

Revisiting the “4th Utility” for 2013

The final vision on the 2013 “wishlist” is that we, as an IT industry, continue to acknowledge the need to develop the 4th Utility.  This is the idea that broadband communications, processing capacity (including SaaS applications), and storage are the right of all citizens.  Much like the first three utilities (roads, water, and electricity), the 4th Utility must be a basic part of all discussions related to national, state, or local infrastructure.  As we move deeper into this millennium, Internet-enabled communications, or something very much like them, will be an essential part of all our lives.

The 4th Utility requires that high-capacity fiber optic infrastructure and broadband wireless be delivered to any location within the country that supports a community, or an individual connected to a community.   We will have to pay a fee to access the utility (the same as with other utilities), but access is our right, and delivering the utility is our obligation.

2013 will be a lot of fun for us in the IT industry.  Cloud computing is going to impact everybody, one way or the other.  Individual data centers will continue to close.  Service-oriented architectures, enterprise architecture, process modeling, and design efficiency will drive a lot of innovation.   We will lose some players, gain others, and be in a better position at the end of 2013 than we are today.

Gartner Data Center Conference Looks Into Open Source Clouds and Data Backup

Day two of the Gartner Data Center Conference in Las Vegas continued reinforcing old topics, appearing at times either to enlist attendees in contributing to Gartner research, or simply to provide conference content directed at promoting conference sponsors.

For example, the sessions “To the Point:  When Open Meets Cloud” and “Backup/Recovery: Backing Up the Future” included a series of audience surveys.  Those surveys were apparently the same ones presented, in the same sessions, for several years, so the speakers could immediately compare this year’s results with results from the same questions over the past two years.  This would lead a casual attendee to believe nothing radically new is being presented on these topics, and that attendees are mainly contributing to further trend-analysis research that will eventually show up in a commercial Gartner Research Note.

Gartner analyst Aneel Lakhani, speaking on “When Open Meets Cloud,” did make a couple of useful, if somewhat obvious, points in his presentation:

  • You cannot secure complete freedom from vendors, regardless of how much open source you adopt
  • Open source can actually be more expensive than commercial products
  • Interoperability is easy to say, but a heck of a lot more complicated to implement
  • Enterprise users have a very low threshold for “test” environments (sorry DevOps guys)
  • If your organization has the time and staff, test, test, and test a bit more to ensure your open source product will perform as expected or designed

However, analyst Dave Russell, speaking on “Backup/Recovery,” was a bit more cut-and-paste in his approach: lots of questions to match against last year’s conference, and a strong emphasis on tape as a continuing, if not growing, medium for disaster recovery.

The problem with this presentation was that the discussion centered on backing up data, with very little on business continuity.  In fact, one slide referenced a recovery point objective (RPO) of one day for backups.   What organization operating in a global market, in Internet time, can possibly design for a one-day RPO?
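
To put a one-day RPO in perspective, here is a minimal sketch of the worst-case exposure for a hypothetical transactional business; the transaction volume and value are invented for illustration.

```python
# What a one-day recovery point objective (RPO) means for a transactional
# business. Transaction volume and value are hypothetical assumptions.

rpo_hours = 24                   # backup-only strategy discussed in the session
transactions_per_hour = 12_000   # assumed order volume
avg_transaction_value = 85.0     # assumed average order value, USD

worst_case_lost_tx = rpo_hours * transactions_per_hour
worst_case_lost_revenue = worst_case_lost_tx * avg_transaction_value
print(f"Worst-case data loss: {worst_case_lost_tx:,} transactions "
      f"(about ${worst_case_lost_revenue:,.0f}) before recovery time is even counted.")
```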

In addition, there was no discussion of the need for compatible hardware at a disaster recovery site that would allow immediate or rapid restart of applications.  Having data on tape is fine.  Having mainframe archival data is fine.  But without a business continuity capability, any organization is likely to suffer significant damage to its ability to function in its marketplace.  Very few organizations today can absorb an extended outage of their global or marketplace presence.

The conference continues until Thursday, and we will look for more positive approaches to data center and cloud computing.

Gartner Data Center Conference Yields Few Surprises

Gartner’s 2012 Data Center Conference in Las Vegas has not yielded any major surprises.  While the attendance is impressive (the stats are not available, but it is clear Gartner is having a very good conference), most of the sessions appear simply to reaffirm what everybody really knows already, reinforcing the reality that data center consolidation, cloud computing, big data, and the move to interoperable frameworks will be part of everybody’s life within a few years.

Gartner analyst Ray Paquet started the morning by drawing a line at the real value of server hardware in cloud computing.  Paquet stressed that cloud adopters should avoid integrated hardware solutions based on blade servers, which carry a high margin, and focus their CAPEX on cheaper “skinless” servers.  Paquet emphasized that integrated solutions are a “waste of money.”

Cameron Haight, another Gartner analyst, fired a volley at the process and framework world, with a comparison of the value DevOps brings versus ITIL.  He described ITIL as a cumbersome burden on organizational agility, while DevOps is a culture-changer that allows small groups to respond quickly to challenges.  Haight emphasized the frequently stressful relationship between development organizations and operations organizations, where operations demands stability and quality, and development needs freedom to move projects forward, sometimes without the comfort of baking code to the standards preferred by operations – and required by frameworks such as ITIL.

Haight’s most direct slide described DevOps as “ITIL minus CRAP.”  Of course, most of his supporting slides for moving to DevOps looked eerily like an ITIL process…

Other sessions attended (by the author) included “Shaping Private Clouds,” a WIPRO product demonstration, and a data center introduction by Raging Wire.  All were valuable introductions for those considering a major change in their internal IT deployments, but nothing cutting edge or radical.

The Raging Wire data center discussion did raise some questions on the overall vulnerability of large box data centers.  While it is certainly possible to build a data center to any standard needed to fulfill a specific need, the large data center clusters in locations such as Northern Virginia are beginning to appear very vulnerable to natural, human, or equipment-failure disruptions.  In addition to fulfilling data center tier classification models as presented by the Uptime Institute, it is clear we are producing critical national infrastructure which, if disrupted, could cause significant damage to the US economy or even social order.

Eventually, much like the communications infrastructure in the US, data centers will need to come under the observation or review of a national agency such as Homeland Security.  While nobody wants a government officer in the data center, protection of national infrastructure is a consideration we probably will not be able to avoid for long.

Raging Wire also noted that some colocation customers, particularly social media companies, are hitting up to 8 kW per cabinet.  That is scary if true, particularly in extended deployments, and could result in serious operational problems if cooling systems were disrupted, as the heat generated in those cabinets would quickly become extreme (see the rough estimate below).  It would also be interesting if Raging Wire and other colocation companies considered developing real-time CFD (computational fluid dynamics) monitoring for their data center floors, allowing better monitoring and predictability than simple zone monitoring solutions.
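
To illustrate how quickly that heat becomes a problem, here is a rough, air-only estimate of the warming rate after a cooling failure. The cabinet count, room volume, and the decision to ignore equipment thermal mass are all simplifying assumptions, so real rooms warm more slowly, but the order of magnitude is the point.

```python
# Air-only estimate of how fast an 8 kW/cabinet zone warms if cooling fails.
# Cabinet count and room volume are assumptions, and the thermal mass of the
# equipment and building is ignored, so real rooms warm more slowly.

cabinets = 40
power_per_cabinet_w = 8_000
room_air_volume_m3 = 1_500       # assumed contained aisle / room air volume
air_density = 1.2                # kg/m^3
air_specific_heat = 1005         # J/(kg*K)

heat_w = cabinets * power_per_cabinet_w
air_mass_kg = room_air_volume_m3 * air_density
deg_c_per_minute = heat_w * 60 / (air_mass_kg * air_specific_heat)
print(f"Air-only warming rate: {deg_c_per_minute:.1f} °C per minute")
# Roughly 10.6 °C per minute with these assumptions: minutes, not hours,
# before thermal shutdown thresholds are reached without working cooling.
```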

The best presentation of the day came at the end: “Big Data is Coming to Your Data Center.”  Gartner’s Sheila Childs brought color and enthusiasm to a topic many consider, well, boring.  Childs was able to bring the value, power, and future of big data into a human-consumable format that kept the audience in their seats until the session ended at 6 p.m.

Childs hit on concepts such as “dark data” within organizations, the value of big data in decision support systems (DSS), and the need for developing and recruiting skilled staff who can actually write or build the systems needed to fully exploit the value of big data.  We cannot argue that point, and can only hope our education system is able to focus on producing graduates with the basic skills needed to fulfill that requirement.
