OSS Development for the Modern Data Center

Modern data centers are complex environments.  Operators must have visibility into a wide range of integrated databases, applications, and performance indicators to understand and manage their operations and activities effectively.

While each data center is different, all share some common systems and characteristics, including, among others:

  • Facility inventories
  • Provisioning and customer fulfillment processes
  • Maintenance activities (including computerized maintenance management systems, or CMMS)
  • Monitoring
  • Customer management (including CRM, order management, etc.)
  • Trouble management
  • Customer portals
  • Security systems (physical access entry/control and logical systems management)
  • Billing and accounting systems
  • Service usage records (power, bandwidth, remote hands, etc.)
  • Decision support system and performance management integration
  • Standards for data and applications
  • Staffing and activities-based management
  • Scheduling and calendar management

Unfortunately, in many cases these systems are run manually, lack standards, and have no automation or integration interconnecting individual back-office components.  This includes many communications companies and telecommunications carriers that previously adhered, or claimed to adhere, to Bellcore data and operations standards.

In some cases the lack of integration stems from mergers and acquisitions of companies with unique or non-standard back-office systems.  The result is difficulty in cross-provisioning, billing, integrated customer management, and accounting: the day-to-day operations of a data center.

Modern data centers must have a high level of automation.  In particular, when an operator owns multiple facilities, a lack of automation makes it very difficult to present a common look and feel, or the level of integration needed to offer a standardized product to markets and customers.

Operational support systems, or OSS, traditionally have four main components:

  • Support for process automation
  • Collection and storage of a wide variety of operational data
  • The use of standardized data structures and applications (a brief sketch follows this list)
  • Supporting technologies
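To make the third component concrete, here is a minimal Python sketch of a standardized inventory record.  All field, type, and status names are hypothetical; the point is that one shared definition can be reused by provisioning, billing, and monitoring alike rather than each keeping its own copy:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional


class AssetStatus(Enum):
    AVAILABLE = "available"
    RESERVED = "reserved"
    IN_SERVICE = "in_service"
    MAINTENANCE = "maintenance"


@dataclass
class InventoryItem:
    """A single shared record definition, reused by every OSS component."""
    asset_id: str      # globally unique identifier
    asset_type: str    # e.g. "cabinet", "cross_connect", "power_circuit"
    facility: str      # which data center houses the asset
    status: AssetStatus = AssetStatus.AVAILABLE
    customer_id: Optional[str] = None  # set once the asset is provisioned
    updated_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```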

In most commercial or public colocation centers and data centers, customer and tenant organizations represent many different industries, products, and services.  Some large colocation centers may have several hundred individual customers.  Other data centers may have larger customers such as cloud service providers, content delivery networks, and other hosting companies.  While such large customers may be few in number, their internally hosted or virtual customers may themselves number in the hundreds or even thousands.

To support their customers effectively, data centers must have comprehensive OSS capabilities.  Given the large number of processes, data sources, and user requirements, the OSS should be designed and developed using a standard architecture and framework that ensures integration and interoperability.

OSS Components

We have conducted numerous Interoperability Readiness surveys with both government and private sector (commercial) data center operators during the past five years.  In more than 80% of the organizations surveyed, processes such as inventory management were built within simple spreadsheets.  Provisioning of inventory items was normally a manual process conducted via e-mail or, in some cases, paper forms.

Manual provisioning resulted in cases of double-booked or double-sold inventory items, as well as inefficient ordering when adding customer-facing inventory or building out additional data center space.

These problems often compounded further, leading to missed customer billing cycles, accounting shortfalls, and management or monitoring system errors.
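A database-backed provisioning step, by contrast, can make the reservation atomic.  The sketch below is a toy example using SQLite; the table and column names are hypothetical, but the pattern is general: because the availability check and the update happen in a single statement inside a transaction, two simultaneous orders for the same cabinet cannot both succeed, a guarantee a spreadsheet-and-email workflow cannot offer.

```python
import sqlite3

def reserve_item(conn: sqlite3.Connection, asset_id: str, customer_id: str) -> bool:
    """Atomically reserve an inventory item; returns False if it is already taken."""
    with conn:  # opens a transaction; commits on success, rolls back on error
        cur = conn.execute(
            "UPDATE inventory SET status = 'reserved', customer_id = ? "
            "WHERE asset_id = ? AND status = 'available'",
            (customer_id, asset_id),
        )
        # rowcount is 0 if another order already claimed the item,
        # so the same cabinet can never be sold twice
        return cur.rowcount == 1

# Toy demonstration: the second attempt to sell CAB-0042 fails cleanly.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (asset_id TEXT PRIMARY KEY, status TEXT, customer_id TEXT)")
conn.execute("INSERT INTO inventory VALUES ('CAB-0042', 'available', NULL)")
print(reserve_item(conn, "CAB-0042", "CUST-7"))   # True
print(reserve_item(conn, "CAB-0042", "CUST-8"))   # False
```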

The new data center, including virtual data centers within cloud service providers, must develop better OSS tools and systems to accommodate the rapidly changing need for elasticity and agility in ICT systems.  This includes having a single window for all required items within the OSS.

An OSS architecture based on service-oriented architecture (SOA) should be prepared using established frameworks and guidance such as TOGAF and/or ITIL, ensuring that all visions and designs fully acknowledge and embrace the needs of each organization’s business owners and customers, and that a comprehensive, structured development process delivers those objectives.

Using standard databases, APIs, service buses, and security, and establishing a high level of governance to enforce a “standards and interoperability first” policy for all data center IT, will allow all systems to communicate, share, and reuse data, ultimately providing automated, single-source data resources for all data center management, accounting, and customer activities.
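To make the single-source idea concrete, the toy sketch below stands in for a real message broker (all topic and field names are hypothetical): provisioning publishes one standardized event, and billing and monitoring each react to it, so no data is ever re-keyed by hand:

```python
from collections import defaultdict
from typing import Callable

class ServiceBus:
    """Minimal in-process stand-in for a real message broker."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)

bus = ServiceBus()

# Billing and monitoring both consume the same provisioning event,
# so neither needs its own manually maintained copy of the data.
bus.subscribe("asset.provisioned", lambda e: print(f"billing: start charges for {e['asset_id']}"))
bus.subscribe("asset.provisioned", lambda e: print(f"monitoring: begin polling {e['asset_id']}"))

bus.publish("asset.provisioned", {"asset_id": "CAB-0042", "customer_id": "CUST-7"})
```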

Any manual transfer of data between offices, applications, or systems must be eliminated, in favor of integrating inventory, data collections and records, processes, and performance management indicators into a fully integrated and interoperable environment.  A basic rule of thumb: if a human being has touched the data, the data has likely been corrupted, or at least its integrity may be brought into question.

Looking ahead to the next generation of data center services, stepping higher up the customer service maturity continuum requires much higher levels of internal and customer-facing process automation.

Following NIST’s definition of cloud computing, whose essential characteristics include “on-demand self-service,” “rapid elasticity,” and “measured service,” in addition to resource pooling and broad network access, it can be assumed that data center users of the future will need to order and fulfill services such as network interconnections, power, and virtual or physical space through self-service, on-demand ordering.
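Self-service ordering implies a machine-facing API rather than an e-mail queue.  The sketch below, using Flask, shows the shape such an on-demand endpoint might take; the route, payload, and service names are purely illustrative, and a real OSS would call the inventory service’s atomic reservation (see reserve_item above) rather than a human workflow:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/orders", methods=["POST"])
def create_order():
    """Hypothetical on-demand ordering endpoint for interconnects, power, or space."""
    order = request.get_json(silent=True) or {}
    # A real implementation would invoke the inventory service's atomic
    # reservation here instead of a manual approval step.
    if order.get("service") not in {"cross_connect", "power_circuit", "cabinet"}:
        return jsonify({"error": "unknown service"}), 400
    return jsonify({"order_id": "ORD-0001", "status": "provisioning"}), 202
```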

The OSS must strive to meet the following objectives:

  • Standardization
  • Interoperability
  • Reusable components and APIs
  • Data sharing

Accomplishing this will require nearly all of the OSS characteristics mentioned above: inventories in databases (not spreadsheets), process automation, and standards for data structures, APIs, and application interoperability.

And, as the ultimate key success factor, management decision support systems (DSS) will finally have the potential for true dashboards for performance management, data analytics, and other real-time tools for making effective organizational decisions.
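With inventories in a database rather than in spreadsheets, a dashboard KPI becomes a query rather than a manual tally.  A minimal sketch, assuming a hypothetical inventory table like the one above extended with a facility column:

```python
import sqlite3

def utilization_by_facility(conn: sqlite3.Connection) -> dict:
    """Share of inventory that is reserved or in service, per facility."""
    rows = conn.execute(
        "SELECT facility, "
        "       AVG(CASE WHEN status != 'available' THEN 1.0 ELSE 0.0 END) "
        "FROM inventory GROUP BY facility"
    ).fetchall()
    return {facility: round(used, 3) for facility, used in rows}
```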

Focusing on Cloud Portability and Interoperability

Cloud computing has helped us understand both the opportunity and the need to decouple physical IT infrastructure from the requirements of business.  In theory, cloud computing not only enhances an organization’s ability to decommission inefficient data center resources but, more importantly, eases the move toward integration and service orientation within supporting IT systems.

Current cloud computing standards, such as those published by the US National Institute of Standards and Technology (NIST), provide good definitions and a solid reference architecture for understanding the vision of cloud computing at a high level.

However, these definitions, while good for addressing the vision of cloud computing, are not at the level of detail needed to really understand the potential impact of cloud computing within an existing organization, nor the potential of enabling data and systems resources to meet the need for data interoperability in a 2020 or 2025 IT world.

The key to interoperability, and subsequent portability, is a clear set of standards.  The Internet emerged from a collaboration of academic, government, and private industry development that bypassed much of the usual technology vendor desire to create a proprietary product or service.  The cloud computing world, while having deep roots in mainframe computing, time-sharing, grid computing, and other web hosting services, was thrust upon the IT community with little fanfare in the mid-2000s.

While NIST, the Open Grid Forum, OASIS, DMTF, and other organizations have developed some level of standardization for virtualization and portability, the reality is that applications, platforms, and infrastructure remain largely tightly coupled, restricting the ease with which developers can achieve higher levels of integration and interconnection of data and applications.

NIST’s Cloud Computing Standards Roadmap (SP 500-291 v2) states:

“…the migration to cloud computing should enable various multiple cloud platforms seamless access between and among various cloud services, to optimize the cloud consumer expectations and experience.

Cloud interoperability allows seamless exchange and use of data and services among various cloud infrastructure offerings and to the data and services exchanged to enable them to operate effectively together.”

This is easy to say; the reality, particularly with PaaS and SaaS libraries and services, is that few fully interchangeable components exist, and any information sharing involves a compromise in flexibility.

The Open Group, in their document “Cloud Computing Portability and Interoperability” simplifies the problem into a single statement:

“The cheaper and easier it is to integrate applications and systems, the closer you are getting to real interoperability.”

The alternative, of course, is an IT world restrained by proprietary interfaces, extending the pitfalls and dangers of vendor lock-in.
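One practical defense, assuming no cross-vendor standard yet covers a given service, is to code against a thin interface of your own and confine each vendor’s SDK to an adapter behind it.  The sketch below illustrates the pattern; every class name is hypothetical, and a real cloud adapter would wrap the vendor SDK where the local-disk adapter writes files:

```python
import os
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """The application codes against this interface, never against a vendor SDK."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class LocalDiskStore(ObjectStore):
    """Reference adapter; a cloud adapter would wrap a vendor SDK the same way."""
    def __init__(self, root: str):
        self._root = root
        os.makedirs(root, exist_ok=True)

    def put(self, key: str, data: bytes) -> None:
        with open(os.path.join(self._root, key), "wb") as f:
            f.write(data)

    def get(self, key: str) -> bytes:
        with open(os.path.join(self._root, key), "rb") as f:
            return f.read()

# Swapping providers then means writing one new adapter,
# not rewriting the application.
store: ObjectStore = LocalDiskStore("/tmp/object-store")
store.put("backup.tar", b"example payload")
print(store.get("backup.tar"))
```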

What Can We Do?

First, the cloud consumer world must take a stand and demand that vendors produce services and applications based on interoperability and data portability standards.  No IT organization at the current point in the IT maturity continuum should procure systems that do not support an open, industry-standard, service-oriented infrastructure, platform, and applications reference model (Open Group).

In addition to the need for interoperable data and services, the concept of portability is essential to developing, operating, and maintaining effective disaster management and continuity-of-operations procedures.  No IT infrastructure, platform, or application should be considered that does not allow and embrace portability.  This includes NIST’s guidance, which states:

“Cloud portability allows two or more kinds of cloud infrastructures to seamlessly use data and services from one cloud system and be used for other cloud systems.”

The bottom line for all CIOs, CTOs, and IT managers: accept the need for service orientation within all existing or planned IT services and systems.  Embrace service-oriented architecture and enterprise architecture, and avoid at all costs the potential for vendor lock-in when considering any level of infrastructure or service.

Standards are the key to portability and interoperability, and IT organizations have the power to keep pushing all vendors toward adoption of, and compliance with, those standards.  Do not accept anything that does not fully support the need for data interoperability.
