A Cloudy Future for Networks and Data Centers in 2010

The message from the VC community is clear – “don’t waste our seed money on network and server equipment.” The message from the US Government CIO is clear – the US Government will consolidate data centers and start moving toward cloud computing. The message from the software and hardware vendors is clear – there is an enormous investment underway in cloud computing technologies and services.

If nothing else, the economic woes of the past two years have taught us we need to be a lot smarter about how we allocate limited CAPEX and OPEX budgets. Whether we choose to implement our IT architecture in a public cloud, in an enterprise cloud, or not at all – we still must weigh the alternatives, and those alternatives must include careful consideration of cloud computing.

Cloud 101 teaches us that virtualization efficiently uses compute and storage resources in the enterprise. Cloud 201 teaches us that content networks facing the Internet can make use of on-demand compute and storage capacity in close proximity to networks. Cloud 301 tells us that a distributed cloud gives great flexibility to both enterprise and Internet-facing content. The lesson plan for Cloud 401 is still being drafted.

Data Center 2010

Data center operators traditionally sell space based on cabinets, partial cabinets, cages, private suites, and in the case of carrier hotels, space in the main distribution frame. In the old days revenue was based on space and cross connects; today it is based on power consumed by equipment.

If the intent of data center consolidation is to relieve the enterprise or content provider of unnecessary CAPEX and OPEX burden, then the data center sales teams should be gearing up for a feeding frenzy of opportunity. Every public cloud service provider from Amazon down to the smallest cloud startup will be looking for quality data center space, preferably close to network interconnection points.

In fact, in the long run, if the vision of cloud computing and virtualization is true, then the existing model of data center should be seen as a three-dimensional set of objects within a resource grid, not entirely dissimilar to the idea set forth by Nicholas Carr in his book the “Big Switch.”

Facilities will return to their roots of concrete, power, and air-conditioning, adding cloud resources (or attracting cloud service providers to provide those resources), and the cabinets, cages, and private suites will start being dismantled to allow better use of electrical and cooling resources within the data center.

Rethinking the Data Center

Looking at 3tera‘s AppLogic utility brings a strange vision to mind. If I can build a router, switch, server, and firewall into my profile via a drag and drop utility, then why would I want to consider buying my own hardware?

If storage becomes part of the layer 2 switch, then why would I consider installing my own SAN, NAS, or fiber channel infrastructure? Why not find a cloud service provider with adequate resources to run my business within their infrastructure, particularly if their network proximity and capacity is adequate to meet any traffic requirement my business demands?

In this case, if the technology behind AppLogic and similar Platform as a Service (PaaS) offerings is true to the marketing hype, then we can start throwing value back to the application. The network, connectivity, and compute/storage resources become an assumed commodity – much like the freeway system, water, or the electrical grid.
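
To make that idea a bit more concrete, here is a minimal sketch – Python, purely illustrative, with component names and fields that are my own assumptions rather than AppLogic’s actual catalog or syntax – of what a drag-and-drop application profile boils down to: a declarative description of virtual components and the connections between them, which the platform can total up and place anywhere on its grid.

```python
# Hypothetical sketch of a PaaS "application profile" as plain data.
# Component types, fields, and names are illustrative assumptions only,
# not 3tera AppLogic's actual syntax.
app_profile = {
    "name": "my-web-business",
    "components": [
        {"type": "firewall",      "name": "fw1",  "rules": ["allow tcp/443"]},
        {"type": "load_balancer", "name": "lb1",  "backends": ["web1", "web2"]},
        {"type": "server",        "name": "web1", "cpu": 2, "ram_gb": 4},
        {"type": "server",        "name": "web2", "cpu": 2, "ram_gb": 4},
        {"type": "storage",       "name": "vol1", "size_gb": 500},
    ],
    "connections": [("fw1", "lb1"), ("lb1", "web1"), ("lb1", "web2")],
}

def total_resources(profile):
    """Add up the virtual CPU, RAM, and storage the profile would consume."""
    cpu  = sum(c.get("cpu", 0)     for c in profile["components"])
    ram  = sum(c.get("ram_gb", 0)  for c in profile["components"])
    disk = sum(c.get("size_gb", 0) for c in profile["components"])
    return cpu, ram, disk

print(total_resources(app_profile))  # (4, 8, 500)
```

The point is not the syntax – it is that once the profile is just data, the provider can schedule it onto whatever part of the grid has capacity.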

Flowing the Profile to the User

We old guys used to watch a SciFi series called “Max Headroom.” Max Headroom was a fictional character who lived within the “Ether,” able to move around through computers and electrical grids – and pop up wherever in the network he desired. Max could also absorb any of the information within computer systems or other electronic intelligence sources, and deliver his findings to news reporters who played the role of investigative journalists.

We are entering an electronic generation not too different from the world of Max Headroom. If we use social networking, or public utility applications such as Hotmail, Gmail, or Yahoo Mail, our profile flows to the network point closest to our last request for application access. There may be a permanent image of our data stored in a mother ship, but the most active part of our profile is parsed to a correlation database near our access point.

Thus, if I am a Gmail user and live in Los Angeles, my correlated profile is available at a Google data cache somewhere in proximity to Los Angeles. If I travel to Hong Kong, then Gmail thinks “Hmmm… he is in HK, we should parse his Gmail image to our HK cache and hope he gets the best possible performance out of the Gmail product from that point.”
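
Google has never published how Gmail actually parses and replicates profiles, so treat the following as a toy illustration only: the “profile follows the user” behavior amounts to picking the cache region closest to the user’s last access point. The regions and coordinates below are assumptions for the example.

```python
import math

# Toy illustration of "the profile follows the user": pick the cache region
# closest to the user's last login location. Regions and coordinates are
# illustrative assumptions, not Google's actual cache topology.
CACHE_REGIONS = {
    "los-angeles": (34.05, -118.24),
    "hong-kong":   (22.32,  114.17),
    "london":      (51.51,   -0.13),
}

def nearest_region(user_lat, user_lon):
    """Return the cache region with the smallest great-circle distance to the user."""
    def haversine(lat1, lon1, lat2, lon2):
        r = 6371.0  # Earth radius in km
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))
    return min(CACHE_REGIONS, key=lambda name: haversine(user_lat, user_lon, *CACHE_REGIONS[name]))

# A user who last logged in from Hong Kong gets parsed to the Hong Kong cache.
print(nearest_region(22.3, 114.2))  # hong-kong
```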

I, as the user, do not care which data center my Gmail profile is cached at, I only care that my end user experience is good and I can get my work done without unnecessary pain.

The data center becomes virtual. The application flows to the location needed to do the job and make me happy. XYZ.Com, who does my mail day-to-day, must understand their product will become less relevant and less effective if their performance on a global scale does not meet international standards. Those standards are being set by companies who are using cloud computing on a global, distributed model to do the job.

2010 is the Year Data Centers Evolve to Support the Cloud

The 100sqft data center cage is rapidly becoming as senseless as buying a used DMS250. The cost in hardware, software, peopleware, and the operational expense of running a small data center presence simply does not make sense. Nearly everything that can be done in a 100sqft cage can be done in a cloud, forcing the service provider to concentrate on delivering end user value, and leaving the compute, storage, and network access to utility providers.

And when the 100sqft cage is absorbed into a more efficient resource, the cost – both electrical/mechanical and environmental – will drop by nearly 50%, given the potential for better data center management using strict hot/cold aisle separation, hot or cold aisle containment, and containers – all those things data center operators are scrambling to understand and implement.

Argue the point, but by the end of 2010, the ugly data center caterpillar will come out of its cocoon as a better, stronger, and very cloudy utility for the information technology and interconnected world to exploit.

Questions Data Center Operators Don’t Want You to Ask

We live in a world of clouds, SaaS, outsourcing, and Everything over IP (EoIP). The challenges IT professionals face when trying to sort through the maze of technology, globalization, SOX, HIPAA, PUE, and so on result in daunting confusion. Mix in a few overzealous sales people, an inquiring CFO, and incorrigible users within the organization, and you have all the pre-requisites for a world class, globalized, migraine headache.

Now let’s go out and consider throwing all this confusion into an outsourced data center. You know your company wants to save money, have better quality facilities, be close to network and Internet exchange points, and be close to carriers who can support your nationally distributed offices. So you do what anybody might consider doing – you call a data center sales person.

Each company has a pitch. That pitch is refined based on what resources the company has to sell, and the thought leadership provided by the data center operator will most certainly promote their “unique” product or service. As the overzealous sales person goes into their pitch, several topics will no doubt emerge:

  • Their power stability
  • Mechanical and Electrical Systems (including maintenance)
  • Their remote hands, smart hands, on-site tech support, and “nutty” devotion to service
  • Completion of SAS70 audits
  • Facility structure
  • Security
  • And so on…

This article will walk through a few topics that are normally not well explained by data center operators, avoided, or simply misrepresented.

The Data Center Compromise, Mixed-Use Buildings

Any data center presents the potential tenant with a series of compromises. Very few commercial data centers are custom-built from the ground up; most are either built into mixed-use properties (those originally built as office space) or conversions (those built for another purpose, such as a retail outlet <we built a large data center in a former WalMart property in Seoul a few years ago>, a warehouse <such as the original Equinix/Pihana site in Tokyo>, or a factory <such as the original Level 3 gateway in Brussels>).

Data center operators choose mixed-use buildings primarily when they are in an attractive location, such as near a carrier hotel, a major fiber optic terminal, or a strategic central business district. Mixed-use buildings are normally built with limited floor loading (how much weight you can actually place on a slab of concrete, and where you can place that weight, such as over a structural beam) and with lower floor-to-ceiling separation (in the US, normally around 12.5 ft).

In addition, mixed-use buildings may have one or more of the following shortfalls:

  • Limited access to utility power
  • Limited “riser” space within the building (for telecom, power, and cooling infrastructure needing to transit the building from basement/ground level or from the rooftop)
  • Antiquated power distribution within the building (such as old bus ducts, switch gear, panels, etc)
  • Limited cooling capacity
  • Limited ability to either power or cool tenants with higher “watts/sqft” requirements (server farms)

Mixed-use buildings are best used by tenants with the following profile:

  • Telecom, routing, and switching carriers/networks
  • Members/participants in a carrier hotel meet-me-room
  • Tenants with limited requirement to support large server installations

While mixed-use buildings may have the most technical limitations, they also tend to be the most expensive space. This is primarily due to the lower cost of telecom carrier and network interconnections, the limited need for interconnection backhaul (if the property has an open meet-me-room or distribution frame), and in most cases simply legacy network effect. The Newby-ism “if you are a network, and not present in a carrier hotel, then you are paying somebody to be present in a carrier hotel” is still valid (Hunter Newby, CEO, Allied Fiber).

For those who are considering outsourcing into a mixed-use building, make sure you understand your requirements for long term growth; the power, cooling, structural, and telecom restrictions; and the safety record of the building. MOST major electrical failures and events which have occurred in the data center industry over the past ten years have been in mixed-use buildings. Find out if your building has had failures, and if so, get a very detailed accounting of how the data center owner has corrected the infrastructure problems which caused the failure.

Do not accept explanations that it (the failure) was human error. While it is probable that many electrical failures in mixed-use buildings are caused by sloppy maintenance, the age of the infrastructure should be considered the greater concern. To understand the infrastructure in a building, ask the data center operator to produce a recent single line diagram, stamped by a certified electrical engineer, showing not only the infrastructure but also its age. Only those with something to hide will refuse the request. Stay away from them…

Bring a qualified consultant with you to the sales meeting, and understand the burden is on the data center operator to answer your questions.

Conversion Buildings

In many cases the conversion building will meet all requirements for building out a high quality data center. If the conversion building is a shell meeting all structural requirements – near unlimited floor loading, high floor-to-ceiling clearance, very large floor plates (greater than 40,000sqft per plate), capacity for high capacity cooling systems (preferably chilled water), generator backup, fuel storage, and good proximity to multiple facility-based telecom carriers – then you can do a lot of good things with a conversion.

Things to keep in mind with conversions:

  • They are often built outside of the city center, limiting high concentrations of facility-based fiber and carrier diversity
  • They are often located in areas sensitive to natural disasters such as flooding
  • They are often located in industrial areas, presenting both physical security challenges to the property (vandalism), as well as physical danger to people who need 24×7 access to their equipment (assault)

With the conversion, just as with the mixed-use building, you will need to ensure you fully understand the electrical and mechanical source and distribution. You need to know the age of equipment, that existing single line diagrams are accurate and certified, as well as ensure the facility has infrastructure laid out for future growth – and the local utilities can support growth (will the power utility provide more power? Will the city allow additional generators and fuel storage?).

The conversion is often a very good choice for server farms and large deployments. The cost of space is normally cheaper, power may be cheaper, and floor loading is normally not an issue. Many satellite data center clusters are popping up in locations such as El Segundo near Los Angeles, offering very high quality data center space developed from conversions.

Site Commissioning, SAS 70, and CMMS

We covered this pretty well in a previous article, and will not go into complete detail here. However the main theme cannot be avoided:

No company should consider collocation within a facility that cannot produce complete documentation that integration testing and commissioning was completed prior to facility operations – and that testing should be at NETA Level 5. In some cases, documentation of “retro” testing is acceptable; however, potential tenants should be aware this is still a compromise, as it is almost impossible to complete a retro-commissioning test in a live facility.

This is most critical in a mixed-use building, where there have been numerous electrical failures due to a lack of any commissioning, limited commissioning, or major infrastructure upgrades without any significant level of integration testing. The candidate data center should provide all historical information on the electrical system, as well as commissioning documentation – on demand, for the prospective tenant. Reticence or reluctance to provide the documentation probably indicates a major problem.

Understanding SAS70 Audits

One thing to keep in mind about SAS70 audits: the audit only reviews items the data center operator chooses to audit. Thus, a company may have very nice and polished SAS70 audit documentation, yet the contents may not include every item you need to ensure the data center operator has a comprehensive operations plan. Consider finding an experienced consultant to review the SAS70 document and provide additional guidance on whether or not the audit actually includes all the facility maintenance and management items needed to ensure continuing protection from mechanical, monitoring/management, electrical, security, or human staffing failures.

Comprehensive SAS70 audits will go into a fair level of detail. If your candidate data center offers a SAS70 audit of 5~10 pages, then you might find it lacking the level of detail needed to give you confidence your mission-critical equipment and applications are being managed in a data center that really “walks the talk.”

The SAS70 audit should include all the following sections:

Security

  • Security Company profile
  • Key inventories
  • Access management
  • Badges
  • Biometrics
  • Staff selection criteria
  • Materials control
  • Confirmation each security guard has completed a background check
  • Security equipment is routinely inspected/tested
  • Security “rounds” are recorded and confirmed
  • Security camera images and access logs are kept for a minimum of 60 days; longer is preferred

Maintenance/CMMS (Computerized Maintenance Management System)

  • Comprehensive preventive maintenance/testing schedule for ALL mechanical and electrical equipment
  • UPS
  • Emergency generators
  • Rectifiers/DC Plant
  • ATS
  • Switchgear
  • Complete semi-annual (or more frequent) infrared scan
  • Breaker audit for NEC compliance (or automated view via current transformers)
  • Service level agreements
  • Emergency call out for all critical M&E equipment
  • Diesel refueling during emergencies or extended operation

Human Resources

  • Staffing process
  • Background checks
  • Certifications
  • Termination management

NOTE: While all of us have examples and stories of people who became super routing engineers, electrical staff, and field ops professionals without formal credentials, having a high number of network, cabling (BICSI), or electrical certifications does give you a level of confidence that the data center company’s knowledge and experience are capable of delivering the desired or marketed service level.

Operations

  • Recurring training
  • Recurring staff meetings
  • Business continuity and disaster recovery plans
  • Daily site verifications
  • Escalation process

Again, the more detailed an audit, the greater your confidence the data center is being managed and operated to the level you can confidently bring your business into their environment for outsourcing.

The SAS70 Type 1 audit is a paper review of the controls as designed at a point in time; the Type 2 audit actually tests the operation and compliance of each control or observation over a period of time.

Final Recommendation

The bottom line is that your operation, whether it is in a cabinet, a 1000sqft cage, or a private suite, depends on the data center operator to support mission-critical applications and functions essential to your business. If you do not believe you have the knowledge or ability to drive a hard, factual line of due-diligence in your data center search, find a consultant who can provide that guidance and ensure you are getting exactly what you are paying to receive.

If the data center operator is reluctant to support your requests for audit or compliance documentation, then the chances are that the operator is either treating your company with a high level of contempt, has problems which should make a potential tenant reluctant to use that facility, or, even worse, simply does not have the needed documentation.

John Savageau, Long Beach

Selecting Your Data Center Part 3 – Understanding Facility Clusters

Now that we have determined the best geographic location for our data center, it is time to evaluate local facility options. The business concept of industry clustering is valid in the data center industry. In most locations supporting carrier hotels and Internet Exchange Points you will normally see a large number of data centers within very close proximity, offering a variety of options – and a maze of confusing pitches from aggressive sales people.

The idea of industry clustering says that whenever a certain industry, such as an automobile manufacturer selects a location to build a factory or assembly plant, others in the industry will eventually locate nearby. This is due to a number of factors including the availability of skilled workers within that industry, favorable city support for zoning, access to utilities, and proximity to supporting infrastructure such as ocean ports, rail, population centers, and communications.

The data center industry has evolved in a similar model. When you look at locations supporting large carrier hotels, such as Los Angeles, Seattle, San Francisco, London, and New York, you will also see there are many options for data centers in the local area. For example, in Los Angeles the One Wilshire Building is a large carrier hotel with collocation space within the building, however there are many options within very close proximity to One Wilshire, such as Carrier Center (600 W. 7th), 818 W. 7th St., the Garland Building, 530 W. 6th, the Quinby Building, and several others.

The Bay Area has similar clusters stretching between Palo Alto and San Jose, and Northern Virginia (Ashburn, Reston, Herndon, Sterling, Vienna) has a high density of facilities in proximity to the large Equinix Exchange Point in Ashburn.

When you have data center clusters, you will also find each facility is either fully meshed with commercial dark fiber interconnecting the buildings, or has several options of network providers offering competitive “lit” services between buildings. 

Note the attached picture of downtown Los Angeles, showing the major colocation facilities and the physical interconnection between the facilities with high capacity fiber (Wilshire Connection).

Discriminating Features Among Data Centers

The Uptime Institute, founded in 1993 (and recently acquired by the 451 Group) has long been a thought leader in codifying and classifying data center infrastructure and quality standards. While many may argue the Uptime Institute is focused on enterprise data center modeling, the same standards set by the Uptime Institute are a convenient metric to use when negotiating data center space in a commercial or public data center.

As mentioned in Part one of this series, there are four major components to the data center:

  • Concrete (space for cabinets, cages, and suites)
  • Power
  • Air-conditioning
  • Access to telecom and connectivity

Each data center in the cluster will offer all the above, at some level of quality scale that differs from others in the cluster. This article will focus on facility considerations. We will look at the Uptime Institute’s “tiered” system of data center classification in a later post.

Concrete. Data centers and carrier hotels supporting major interconnection points or industry cluster “hubs” will generally draw higher prices for their space. The carrier hotel will draw the highest prices, as being colocated with the telecom hub brings more value to space within the meet-me-room or adjacent space within the same building. Space within the carrier hotel facility is also normally limited (there are exceptions, such as the NAP of the Americas in Miami), restricting individual tenants to a few cabinets or small cages.

The attraction of being in or near the carrier hotel meet-me-room is not necessarily in the high cost cabinet or cage, it is the availability of multiple carriers and networks available normally with a simple cross connect or jumper cable, rather than forcing networks and content providers to purchase/lease expensive backhaul to allow interconnection with other carriers or networks collocated in a different facility.

Meet-me-rooms at the NAP of the Americas, 60 Hudson, the Westin Building, and One Wilshire in the US, and Telehouse in London, offer interconnections with several hundred potential partners or carriers within the same main distribution frame. Thus the expensive meet-me-room cabinets and cages make up their value through access to other carriers with inexpensive cross connects.

NOTE: One thing to keep in mind about carrier hotels and meet-me-rooms; most of the buildings supporting these facilities were not designed as data centers – they are office conversions. Thus the electrical systems, air-conditioning systems, floor loading, and security infrastructure are not as robust as you might find in a nearby facility constructed as a data center or telecom central office.

Facilities near the carrier hotel will generally have slightly lower cost space. As industry concerns over security within the carrier hotel increase, and the presence and quality of adjacent buildings exceeds that of the carrier hotel, many companies are reconsidering their need to locate within the legacy carrier hotel. In addition, many nearby collocation centers and data centers are building alternative meet-me-rooms and distribution frames within their building to accommodate both their own tenants, as well as offering the local community a backup or alternative interconnection point to the legacy carrier hotel.

This includes the development of alternative and competitive Internet Exchange Points.

This new age of competitive or alternate meet-me-rooms, multiple Internet Exchange Points, and data center industry clusters gives the industry more flexibility in facility selection. In the past, Hunter Newby of Allied Fiber claimed “if you are not present in a facility such as 60 Hudson or the Westin Building, you are paying somebody else to be in the building.” This has gradually changed; in cities such as New York a company can get near-identical interconnection or peering support at 111 8th Ave or 32 Avenue of the Americas as it can within 60 Hudson.

As the clusters continue to develop, and interconnections between tenants within the buildings become easier, the requirement to physically locate within the carrier hotel becomes less acute. If you are in Carrier Center in Los Angeles, the cost and difficulty of completing a cross connection with a tenant in One Wilshire has become almost the same as if you were a tenant within the One Wilshire Building. Ditto for other facilities within the industry cluster. In fact, the entire metro areas of New York, the Bay Area in Northern California, Northern Virginia, and Los Angeles have all become virtual extensions of the original meet-me-room in the legacy carrier hotel.

The Discriminating Factor

Now as potential data center tenants, we have a somewhat level playing field of data center operators to choose from. This has eliminated much of the interconnection part of our equation, and allows us to drill into each facility based on our requirements for:

  1. Cost/budget
  2. Available services
  3. Space for expansion or future growth
  4. Quality of power and air conditioning

Part four of this series will focus on cost.

As always, your experiences and comments are welcome.

John Savageau, Long Beach


Wilshire Connection photo courtesy of Eric Bender at www.wilshireconnection.com

Selecting Your Data Center Part 2 – Geography and Location

Data center selection is an exercise in compromise. Everybody would like the best of all worlds: a highly connected facility offering 24×7 smart hands support, impenetrable security, protection from all natural and man-made disasters, and service level agreements offering 5-nines power availability at $.03/kWh. It is not likely we will be able to hit all those desired features in any single facility.

Data center operators price their facilities and colocation based on several factors:

  • Cost of real estate in their market
  • Cost of power and utilities in their market
  • Competition in their market
  • Level of service offered (including power, interconnections, etc)
  • Quality of facility (security, power density, infrastructure, etc)

Networks, Content Providers, Enterprises, and Eyeballs

The basic idea of an Internet-enabled world is that eyeballs (human beings) need to access content, content needs access to eyeballs, eyeballs and content need access to networks (yes, eyeballs do need to communicate directly with other eyeballs), and networks need access to content and eyeballs. Take one of the above out of the equation, and the Internet is less effective. We can also logically add applications to the above model, as applications are now communicating directly with applications, allowing us to swap eyeballs for apps to complete the high level model.

Organizations using the Internet fall into a category of either a person, an application (including enterprise, content, and entertainment applications), or a network (including access, regional, and global networks).

Each potential organization considering outsourcing some or all of their operations into a data center needs to ask themselves a few basic questions:

  1. Is the organization heavily dependent on massive storage requirements?
  2. Is the organization highly transaction-oriented? (such as a high volume eCommerce site)
  3. Is the organization a content delivery network/CDN, requiring high bandwidth access to eyeballs?
  4. Are your target applications or eyeballs local, regional, global?
  5. Is the company a network service provider highly dependent on network interconnections?

Storage and servers = high density power requirements. The more servers, the higher the operational expenses for both space and power. This would logically drive a potential collocation customer to the location with the cheapest power – however that might be a location outside of central business districts, and possibly outside of an area well connected with domestic and international telecom carriers, network service providers, and access networks (including the cable TV networks serving individual subscribers).

Thus the cost of power and real estate might be favorable if you are located in Iowa, however bringing your content to the rest of the world may limit you to one or two network providers, which with limited competition will likely raise the price of bandwidth.

Locating your business in a city center such as New York or Los Angeles will give you great access to bandwidth through either a colocated carrier hotel or carrier hotel proximity. However, the cost of real estate and power in the city center will be a multiple of what you may find in areas like Oregon or Washington State.

In a perfect telecom world, all networks and customers would have access to dark fiber from facility-based carriers serving the location they are either located or doing business. Allied Fiber’s Hunter Newby believes that facility-based carriers should be in the business of providing the basic “interstate highway” of communications capacity, allowing any company who can afford the cost to acquire high capacity interconnections to bring their operation closer to the interconnection points.

If you follow the carrier world you will know that, at least in the United States, carriers are reluctant to sell dark fiber resources, preferring to multiplex their fiber into “lit” circuits managed and provisioned by the carrier. Clearly that provides a lot more potential revenue than selling “wholesale” infrastructure. It also makes it a lot more expensive for a company considering collocation to locate its facility in a geography separated from the major interconnection sites.

The Business Case and Evaluation

Again, selecting your desired location or locations to outsource your business is a compromise. In the United States, Virginia is a good location for power, and an expensive location for interconnecting and collocating. Los Angeles is among the lowest cost areas for interconnections, midway up the power scale, but more expensive for space.

Consider the possibility of moving to a great location in Idaho, with low cost power and low cost real estate. You build a 500,000sqft facility with more than 300 watts/sqft power capability. Your first project supports more than 20,000 servers delivering Internet streaming media content. Your facility costs are low, but your network costs become very high. You cannot buy dark fiber from a facility-based carrier, and the cost of leasing 10G wavelengths is nearly $10,000/month per wavelength. You probably have 500Gbps of traffic to push into the Internet. Is the power cost vs. connectivity and bandwidth compromise in your favor?
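
Here is a back-of-the-envelope version of that compromise. The wavelength price and traffic figure come from the example above; the per-server draw and the two power rates are purely illustrative assumptions.

```python
# Back-of-the-envelope check of the Idaho example above. The wavelength price
# comes from the example; the per-server draw and power rates are assumptions.
servers            = 20_000
watts_per_server   = 500          # assumption: average draw per streaming server
traffic_gbps       = 500          # streaming traffic to push into the Internet
wave_capacity_gbps = 10
wave_cost_month    = 10_000       # $/month per 10G wavelength (from the example)

power_kw      = servers * watts_per_server / 1000
kwh_per_month = power_kw * 24 * 30
cheap_power   = 0.04              # $/kWh, assumed rural rate
city_power    = 0.12              # $/kWh, assumed metro rate
power_savings = kwh_per_month * (city_power - cheap_power)

waves_needed   = -(-traffic_gbps // wave_capacity_gbps)   # ceiling division
transport_cost = waves_needed * wave_cost_month

print(f"Monthly power savings at the remote site: ${power_savings:,.0f}")
print(f"Monthly wavelength cost to reach the interconnection points: ${transport_cost:,.0f}")
```

With these made-up rates the power savings and the transport premium land in the same ballpark – which is exactly why the location decision is a genuine compromise rather than an obvious win.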

Here is another exercise. Let’s say for argument, in a Los Angeles carrier hotel static costs may run:

  1. $1000/month for a cabinet in the carrier hotel, $500/month for a cabinet in a nearby facility.
  2. $12/breakered amp (breakered amps are still the norm, moving to usage-based models)
  3. $200/month for a cross connection within the carrier hotel building
  4. $1000/month for a fiber cross connect to a nearby or adjacent building
  5. $1000/month for an Internet Exchange Point/IXP connection (if you are a network service provider)

NOTE: Los Angeles has several large carrier hotels in the downtown area, as does New York, with buildings such as 60 Hudson and 111 8th Ave offering potential tenants multiple options. Other cities such as Seattle, Miami, and Chicago have more limited options, with a single dominant carrier hotel.

If you are a medium sized network service provider, you may consider getting a couple cabinets in a nearby facility and acquire a couple fiber cross connections to one or more nearby carrier hotels. Get a cabinet within the carrier hotel, add high capacity switching or routing equipment in the cabinet, and then try to maximize the number of local cross connects with other networks and content providers, and connect to a local Internet Exchange Point for additional peering flexibility.

Then take your same requirement for both cabinet space and interconnections, and try the evaluation in several different cities and markets. Fit the cost into one of the above squares in the Data Center Basic Elements chart, and determine the cost for each component.
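
As a sketch of how that evaluation might be organized, here is a small model using the illustrative Los Angeles figures above. The cabinet counts, amps, and cross connect counts are assumptions for the example, not quotes from any facility.

```python
# Rough comparison using the illustrative Los Angeles figures above.
# Cabinet counts, amps, and cross connect counts are assumptions for the example.
CARRIER_HOTEL_CABINET = 1000   # $/month
NEARBY_CABINET        = 500    # $/month
PER_BREAKERED_AMP     = 12     # $/month
XC_IN_BUILDING        = 200    # $/month, cross connect within the carrier hotel
XC_TO_NEARBY          = 1000   # $/month, fiber cross connect to an adjacent building
IXP_PORT              = 1000   # $/month

def all_in_carrier_hotel(cabinets=4, amps_per_cabinet=20, cross_connects=10):
    return (cabinets * CARRIER_HOTEL_CABINET
            + cabinets * amps_per_cabinet * PER_BREAKERED_AMP
            + cross_connects * XC_IN_BUILDING
            + IXP_PORT)

def split_deployment(cabinets=4, amps_per_cabinet=20, cross_connects=10, fiber_ties=2):
    # One switching cabinet stays in the carrier hotel; the bulk space moves nearby.
    return (1 * CARRIER_HOTEL_CABINET
            + (cabinets - 1) * NEARBY_CABINET
            + cabinets * amps_per_cabinet * PER_BREAKERED_AMP
            + cross_connects * XC_IN_BUILDING
            + fiber_ties * XC_TO_NEARBY
            + IXP_PORT)

print(all_in_carrier_hotel())               # 7960
print(split_deployment())                   # 8460 - fiber ties dominate at small scale
print(all_in_carrier_hotel(cabinets=20))    # 27800
print(split_deployment(cabinets=20))        # 20300 - the split pays off as you grow
```

At a handful of cabinets the fiber ties back to the carrier hotel eat up the savings; as the space and power footprint grows, the split deployment starts to pay off. Run the same numbers for each candidate market.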

If your business requirement is more dependent on space, and that is the highest potential operational expense, then you need to consider which location will minimize cost increases in the other three quadrants while you evaluate the best location for meeting your space budget. If your requirement spans several different geographies, add the cost of interconnection between locations to your interconnection costs. Does the location give you adequate access to the target applications or eyeballs?

If you find that a location in Omaha, Nebraska, meets all your requirements, but your target audience also includes a high percentage in India or China, then the cost of getting to your eyeballs in both OPEX and performance may make the Nebraska site untenable – even though it meets your high level budget.

Enter the Cloud

Nearly all businesses and organizations now have an additional alternative: the virtualized commercial cloud service provider. Virtualization products have come a long way over the past couple of years, and are maturing very quickly. CSPs such as Google, Amazon, Rackspace, and Layered Technologies are providing very powerful applications support for small and medium business, and cloud computing has become a very visible debate at the national level as governments and large corporations deal with questions of:

  • Focusing on their core competencies, rather than internal IT organizations
  • Building more efficiency into the IT infrastructure (heavy on energy efficiency)
  • Recovering space used by IT and computer rooms
  • Reducing OPEX spent on large IT support staff
  • Better technologies such as netbooks
  • And more…

Thus the physical data center now has competition from an unlikely source – the cloud. All new IT and content-related projects should consider cloud computing or software as a service (SaaS) models as a potential alternative to bricks and mortar data center space.

Many venture capital companies are now requiring their potential investments to consider a hosted or SaaS solution to outsource their office automation, web presence, and eCommerce applications. This is easily done through a commercial web service or cloud hosting company, with the additional option of on-demand or elastic expansion of their hosting resources. This may be the biggest potential competitor to the traditional data center. The venture community simply does not want to get stuck with stranded equipment or collocation contracts if their investment fails.

Disaster Recovery and Business Continuity

One final note on selecting your location for outsourcing. Most companies need some level of geographic diversity to fulfill a business need for offsite disaster recovery apps and storage, load balancing, proximity (to eyeballs and applications), and interconnections. Thus your planning should include some level of geographic diversity, including the cost of interconnecting facilities to mirror, parse, or back up files. The same rules apply, except that in the case of backup the urgency for high density interconnections is lower than at the primary operating location.

This does raise the potential of using facilities in remote locations, or locations offering low cost collocation and power pricing for backups.


Part 3 will explore the topic of understanding the hidden world of data center tiers, mechanical and electrical infrastructure, and site structure.

John Savageau, Long Beach



Selecting Your Data Center Part 1 – Understanding the Market


The data center industry continues to evolve with mergers, acquisitions, and a healthy crop of emerging companies. New data center products and services are hitting the street, an aggressive debate continues on the model of selling space vs. power, and alternatives to physical data center space in the cloud are giving us a confusing maze of options to meet our outsourcing needs.

The data center market is not unique. For example, in Southern California we have a wide variety of supermarkets and grocery stores including VONs, Ralphs, Albertsons, Jons, Trader Joes, Whole Foods, and lots of others. All grocery stores basically sell the same kinds of products, with very few exceptions.

What makes you go to VONs, rather than Whole Foods? Is it location? Prices? Image? A social issue?

The data center industry is not significantly different. In a city such as Los Angeles you have Equinix, Switch and Data, Savvis, BT Infonet, CoreSite, US Colo, Digital Realty, Level 3 – just to name a few. What makes one facility more attractive than another to fulfill your collocation needs?

Data centers, at the most common denominator, have traditionally offered:

  • Concrete (space for cabinets, racks, cages, suites, etc)
  • Power
  • Air conditioning
  • Interconnections

If all data centers offer the basic components listed above, then what discriminates the data centers from one another?

Now we can add additional alternatives to the basic data center model – the public cloud services provider/CSP and Software as a Service/SaaS.

As a potential data center tenant (this includes “virtual” data center tenants living in a CSP infrastructure) we have to evaluate all the above components, and determine which collocation or data center provider will best meet our facility, budget, and connectivity needs.

The Sense of Urgency

The CIO of the United States, Vivek Kundra, recently pressed the case for data center consolidation within the US government, as well as offering a strong recommendation that the US data industry strongly consider moving their operations into consolidated data centers or virtualizing within a cloud provider.

It is clear that small and medium companies, as well as most content delivery companies, find better efficiencies by bringing the eCommerce and Internet-facing parts of their business into the data center and locally interconnecting with the Internet service provider community.

The cost of building a data center, providing staffing to manage the data center, and ensuring the efficiency of power and cooling usage is beyond the core competence of most companies. The need for disaster recovery plans, offsite storage, and other business continuity planning is just part of the long list of items we need to consider as part of an overall information technology/IT or general business plan.

The potential waste of operational expenses, capital budgets, and resulting market “opportunity cost” justifies all companies at least considering outsourcing some or all of their IT operations – particularly as data centers and CSPs increase their capabilities.

With the availability of netbooks, online applications (SaaS), and server-based office automation products, all companies should put this on their annual review list. Even the Los Angeles Police Department (LAPD) recently announced their decision to outsource their email to Google. This model does not appear to be going away anytime soon.

The “Selecting Your Data Center” Series

This series will walk through the process of identifying the need for outsourcing, identifying the best location for your data center, discriminating between the alternatives, and finally getting to your decision.

We welcome all comments, experiences, and discussions related to the data center community that would provide productive feedback for a potential data center or CSP tenant.

John Savageau, Long Beach

Deleting Your Hard Drives – Entering a Green Data Center Future of SSDs

For those of us old-timers who muscled 9-track tapes onto 10 ft tall tape drives on Burroughs B-3500 mainframe computers, with a total storage capacity of about 5 kilobytes, the idea of sticking a 64 gigabyte SD memory chip into my laptop computer is pretty cosmic.

Terms like PCAM (punch card adding machines) are no longer part of the taxonomy of information technology, nor would any young person in the industry comprehend the idea of a disk platter or disk pack.

Skipping a bit ahead, we find a time when you could purchase an IBM “XT” computer with an integrated 10 megabyte hard drive. No more reliance on 5.25″ or later 3.5″ floppy disks. Hard drives have now evolved to the point where Fry’s will pitch you a USB or home network 1 terabyte drive for about $100.

Enter the SSD

October 2009 brings us to the point where hard drives are becoming a compromise solution. The SSD (Solid State Disk) has jumped onto the data center stage. With MySpace’s announcement that they are replacing all 1,770 of their existing disk drive-based server systems with higher capacity SSDs, quoting that the SSDs use only 1% of the power required by disk drives, data center rules are set to change again.

SSDs are efficient. If you read press releases and marketing material supporting SSD sales you will hear numbers like:

  • “…single-server performance levels with 1.5GB/sec. throughput and almost 200,000 IOPS
  • … a 320GB ioDrive can fill a 10Gbit/sec. Ethernet pipe
  • … four ioDrive Duos in a single server can scale linearly, which provides up to 6GB/sec. of read bandwidth and more than 500,000 read IOPS (Fusion.io)

This means not only are you saving power per server, you are also able to pack a multiple of existing storage capacity into the same space as currently possible with traditional disk systems. As clusters of SSDs become possible through additional development of parallel systems, we need to get our heads around the concept of a three-dimensional storage system, rather than the linear systems used today.

The concept of RAID and tape backup systems may also become obsolete, as SSDs hold their images when primary power is removed.

Now companies like MySpace will be in a really great position to re-negotiate their data center and colocation deals, as their actual energy and space requirements will potentially be a fraction of existing installations. Even considering their growth potential, the reduction in actual power and space will no doubt give them more leverage to use in the data center agreements.

Why? Data center operators are now planning their unit costs and revenues based on power sales and consumption. If a company like MySpace is able to reduce their power draw by 30% or more, this represents a potentially huge opportunity cost to the data center in space and power sales. Advantage goes to the tenant.

The Economics of SSDs

Today, the cost of SSDs is higher than traditional disk systems, even for Fibre Channel or InfiniBand-attached large disk (SAN or NAS) installations. According to Yahoo Tech the cost of an SSD is about 4 times that of a traditional disk. However, they also indicate that cost is quickly dropping, and we will probably see near parity within the next 3~4 years.

Now, remember the claim MySpace made that with the SSD migration they will consume only 1% of the power used by traditional disk (that is only the disk, not the entire chassis or server enclosure). If you look through a great white paper (actually called a “Green Paper”) provided by Fusion.io, you will see that implementation of their SSD systems in a large disk farm of 250 servers (components include main memory, 4x net cache, 4x tier 1/2/3 storage, and tape storage) reduces the site load from 146.6kW to 32kW.

Data centers can charge anywhere from $120~$225/kW per month, showing that we could potentially – if you believe the marketing material – see a savings of over $20,000/month at $180/kW. This would also represent 47 tons of carbon, using the Carbon Footprint Calculator.
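
The arithmetic behind that savings figure is worth spelling out. The kW numbers come from the Fusion.io paper quoted above; the $180/kW rate is simply a mid-range figure from the pricing span mentioned.

```python
# Reproducing the savings estimate from the Fusion.io figures quoted above.
before_kw   = 146.6        # disk-based 250-server farm
after_kw    = 32.0         # the same farm after SSD migration
rate_per_kw = 180          # $/kW per month, mid-range colocation power pricing

saved_kw      = before_kw - after_kw          # 114.6 kW
saved_dollars = saved_kw * rate_per_kw        # about $20,600 per month
print(f"{saved_kw:.1f} kW saved, roughly ${saved_dollars:,.0f}/month at ${rate_per_kw}/kW")

# The ~47 tons/month carbon figure implies roughly 0.4 tons per kW per month
# for the local generation mix (47 / 114.6) - an assumption baked into
# whichever carbon calculator is used.
print(f"Implied carbon factor: {47 / saved_kw:.2f} tons per kW per month")
```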

Fusion.io reminds us that

“In 2006, U.S. data centers consumed an estimated 61 billion kilowatt-hours (kWh) of energy, which accounted for about 1.5% of the total electricity consumed in the U.S. that year, up from 1.2% in 2005. The total cost of that energy consumption was $4.5 billion, which is more than the electricity consumed by all color televisions in the country and is equivalent to the electricity consumption of about 5.8 million average U.S. households.

• Data centers’ cooling infrastructure accounts for about half of that electricity consumption.

• If current trends continue, by 2011, data centers will consume 100 billion kWh of energy, at a total annual cost of $7.4 billion and would necessitate the construction of 10 additional power plants. (from “Taming the Power Hungry Data Center”)”

When we consider the potential impact of data center consolidation through use of virtualization and cloud computing, and the rapid advancements of SSD technologies and capacities, we may be able to make a huge positive impact by reducing the load Internet, entertainment, content delivery, and enterprise systems will have on our use of electricity – and subsequent impact on the environment.

Of course we need to keep our eyes on the byproducts of technology (e-Waste), and ensure making improvements in one area does not create a nightmare in another part of our environment.

Some Additional Resources

StorageSearch.Com has a great listing of current announcements and articles both following and describing the language of the SSD technology and industry. There is still a fair amount of discussion on the quality and future direction of SSDs, however the future does look very exciting and positive.

For those of us who can still read the Hollerith coding on punch cards, the idea of >1.25TB on an SSD is abstract. But abstract in a fun, exciting way.

How do you feel about the demise of disk? Too soon to consider? Ready to install?

John Savageau, Long Beach

How Green is Your Data Center?

Data Center “X” just announced a 2 megawatt (MW) expansion to their facility in Northern California. A major increase in data center capacity, and a source of great joy for the company. And the source of potentially 714 additional tons of carbon introduced into the environment each month.

Many groups and organizations are gathering to address the need to bring our data centers under control. Some are focused on providing marketing value for their members; most others appear genuinely concerned with the amount of power being consumed within data centers, the amount of carbon being produced by data centers, and the potential for using alternative or clean energy initiatives within data centers. There are stories around which claim the data center industry is actually using up to 5% of the power consumed within the United States, which if true, makes this a really important discussion.

If you do a “Bing” search on the topic of “green data center,” you will find around 144 million results – three times as many as a “Paris Hilton” search. That makes it a fairly saturated topic, indicating a heck of a lot of interest. The first page of the Bing search gives you a mixture of commercial companies, blogs, and “ezines” covering the topic – as well as an organization or two.

With this level of interest you might expect just about everybody in the data center industry to be aggressively implementing “green data center best practices.” Well, not really. In the past month the author (me!) toured no fewer than six commercial data centers. In every data center I saw major best practices violations, including:

  • Large spacing within cabinets forcing hot air recirculation (not using blanking panels, as well as loose PCs and tower servers placed ad hoc on cabinet shelves)
  • Failure to use Hot/Cold aisle separation
  • High density cabinets using open 4 post racks
  • Spacing in high density server areas between cabinets
  • Failure to use any level of hot or cold air containment in high density data center spaces, including those with raised floors and drop-ceilings which would support hot air plenums

And other more complicated issues such as not integrating the electrical and environmental data into a building management system.

The Result of Poor Data Center Management

The Green Grid developed a metric called Power Usage Effectiveness (PUE) to measure the effectiveness of power usage within a data center. The equation is very simple: the PUE is the total facility power consumption divided by the power actually consumed by internal IT equipment (or, in the case of a public data center, the customer-facing or revenue-producing equipment). A factor of 2.0 would indicate that for every watt consumed by IT equipment, another watt is required by support equipment (such as air conditioning, lighting, or other).

Most data centers today consider a target value of 1.5 good, with some companies such as Google trying to drive their PUE below 1.2 – an industry benchmark.
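
The arithmetic is simple enough to show in a couple of lines; the kW figures below are arbitrary examples, not measurements from any particular facility.

```python
def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness: total facility power divided by the IT load."""
    return total_facility_kw / it_load_kw

# A facility drawing 3,000 kW in total to support a 2,000 kW IT load:
print(pue(3000, 2000))   # 1.5 - the commonly cited "good" target
print(pue(4800, 4000))   # 1.2 - the aggressive benchmark mentioned above
```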

Other data centers are not even at the point where they can collect meaningful PUE data. The previous Google link has an extended description of data collection methodology, which is a great introduction to the concept. The Uptime Institute of course has a large amount of support material, and a handy Bing search reveals another 995,000 results on the topic. There is no reason why any data center operator should be in the dark or uninformed on the topic.

So let’s use a simple PUE example and carbon calculation to determine the effect of a poor PUE:

Let’s start with a 4 MW data center. The data center currently has a PUE of 4.0, meaning that of the 4 MW of power consumed within the data center, 3 MW are consumed by support systems and 1 MW by actual IT equipment. In California, using the carbon calculator, this would return 357 tons of carbon produced by the IT equipment and 1071 tons of carbon produced by support equipment such as air conditioning, lights, poorly maintained electrical equipment, etc., etc., etc…

1071 tons of carbon each month, possibly generated by waste which could be controlled through better design, management, and operations in our data centers. Most commercial data centers are in the 4~10MW range. Scary.
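
Putting the same example in code, using the carbon factor implied by the calculator result above – roughly 357 tons per MW per month for the California generation mix, an assumption derived from the numbers in this post rather than an official figure:

```python
# The 4 MW example above, using the implied California factor of roughly
# 357 tons of carbon per MW per month (1 MW of IT load -> 357 tons above).
TONS_PER_MW_MONTH = 357   # assumption derived from the calculator result above

def carbon_split(total_mw, pue):
    """Split monthly carbon between IT load and support overhead for a given PUE."""
    it_mw      = total_mw / pue
    support_mw = total_mw - it_mw
    return it_mw * TONS_PER_MW_MONTH, support_mw * TONS_PER_MW_MONTH

print(carbon_split(total_mw=4.0, pue=4.0))   # (357.0, 1071.0) tons per month
# Driving the same 1 MW IT load at a PUE of 1.5 instead:
print(carbon_split(total_mw=1.5, pue=1.5))   # (357.0, 178.5) - most of the waste disappears
```

Same IT load, a fraction of the carbon – which is the whole argument for chasing a better PUE.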

The US Department of Energy recently completed an audit entitled “Department of Energy Efforts to Manage Information Technology in an Energy-Efficient and Environmentally Responsible Manner,” which highlights the fact that even tightly regulated agencies within the US Government have ample room for improvement.

“We concluded that Headquarters programs offices (which are part of the Department of Energy’s Common Operating Environment) as well as field sites had not developed and/or implemented policies and procedures necessary to ensure that information technology equipment and supporting infrastructure was operated in an energy-efficient manner and in a way that minimized impact on the environment.” (OAS-RA-09-03)

What Can We Do?

The easiest thing to do is quickly replace all traditional lighting with low power draw LED lamps, and only use the lamps when human beings are actually working within the data center space. Lights generate a tremendous amount of heat, and consume a tremendous amount of electricity. Heat = air-conditioning load, if that wasn’t already obvious. Completely wasted power, and completely unnecessary production of carbon. If you are in a 10,000sqft data center, you may have 100 lighting fixtures in the room. Turn them off.

If your data center requires security cameras 24×7, consider using dual-mode cameras that have low light vision capability.

Place blanking panels in all cabinets. Consider removing all open racks from your data center unless you are using them for passive cabling, cross-connects, or very low power equipment. Consider using hot or cold aisle containment models for your cabinet lineups. There is lots of debate on the merits of hot aisle containment vs. cold aisle containment, but the bottom line is that cool air going into a server makes the server run better, reduces the electrical draw on fans, and increases the value of every watt applied to your data center.

Consider this – if you have 10 servers using a total of 1920 watts (120v with a 20 amp breaker <at 16 amps draw>), that gives you the potential of running those 10 servers at full specification draw. That includes internal fans which start as needed to keep internal components cool enough to operate within equipment thresholds. If the server is running hot, then you are using your full 192 watts per server. If the server is running with cool air on the intake side, no hot air recirculation producing heat on the circuit boards, then you can reasonably expect to reduce the electrical draw on that component.

If you are able to reduce the actual draw each server consumes by 30~40% by removing hot air recirculation and keeping the supply side cool, then you may be able to add additional servers to the cabinet and increase your potential processing capacity for each breaker and cabinet by another 30~40%. This will definitely increase your efficiency, cost you less in electricity, and give you additional processing potential.
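
A quick sanity check on that arithmetic – the 35% reduction used below is simply the midpoint of the 30~40% range above, not a measured figure:

```python
# The 120V / 20A circuit example above: derating to 80% gives 16 usable amps.
volts, breaker_amps, derate = 120, 20, 0.8
usable_watts = volts * breaker_amps * derate        # 1,920 W per circuit
nameplate_per_server = 192                          # W, full specification draw

servers_hot  = usable_watts // nameplate_per_server                 # 10 servers
# If good airflow trims actual draw by ~35% (midpoint of the 30-40% range):
servers_cool = usable_watts // (nameplate_per_server * 0.65)        # 15 servers

print(f"{usable_watts:.0f} W usable: {servers_hot:.0f} servers at nameplate, "
      f"{servers_cool:.0f} with good airflow")
```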

Sources of Information

Quite a few sources of information, beyond the Bing search, are available to help IT managers and data center managers. APC probably has the most comprehensive library of white papers supporting the data center discussion (although, like all commercial vendors, you will see a few references to their own hardware and solutions). HP also has several great, easy to understand white papers, including one of the best, entitled “Optimizing facility operation in high density data center environments” – a step-by-step guide to deploying an efficient data center.

The Bing search will give you more data than you will ever be able to absorb, however the good news is that it is a great way to read through individual experiences, including both success stories and horror stories. Learn through others’ experiences, and start on the road to both reducing your carbon footprint and getting the most out of your data center or data center installation.

Give us your opinions and experiences designing and implementing the green data center – leave a comment and let others learn from you too!

John Savageau, Long Beach

Telecom Risk and Security Part 4 – Facilities

A 40 year old building with much of the original mechanical and electrical infrastructure. A 40 year old 4000 amp, 480 volt aluminum electrical bus duct, which had been modified and “tapped” often during its life, with much of the work done in violation of equipment specifications. With old materials such as bus insulation gradually deteriorating, the duct expanding and contracting over the years, and the fact that aluminum was used during the initial installation to either save money or test a new technology vision – it all becomes a risk. A risk of bus failure, or at worst a bus failing to the point it results in a massive electrical explosion.

Sound extreme? Now add a couple of additional factors. The building is a mixed-use telecom carrier hotel, with additional space used for commercial collocation and standard commercial office space. This narrows it down to most of the carrier hotel facilities in the US and Europe. Old buildings, converted to mixed-use carrier hotel and collocation facilities, due mainly to an abundance of vacant space during the mid-1990s, and a need for telecom interconnection space following the Telecommunications Act of 1996.

Over the past four years the telecom, Internet, and data center industry has suffered several major electrical events. Some have resulted in complete facility outages, others have been saved by backup systems which operated as designed, preventing significant disruption to tenants and the services operated within the building.

A partial list of recent carrier hotel and data center facility outages or significant events includes some of the most important facilities in the telecom and Internet-connected industry:

  • 365 Main in San Francisco
  • RackSpace hosting facilities in Dallas
  • Equinix facilities in Australia and France
  • MPT in San Jose
  • IBM facility in NZ
  • Fisher Plaza in Seattle
  • Cincinnati Bell

And the list goes on. These facilities are managed by good companies, but they have many issues in common, and most of those issues are human issues. The resulting outages caused havoc throughout a wide range of commercial companies, telecom companies, Internet services, and content providers.

The Human Factor in Facility Failures

Building a modern data center or carrier interconnection point follows a fairly simple series of tasks. Adhering to a data center design and construction checklist, with strict compliance to the process and its individual steps, can often mean the difference between a well-run facility and one that is at risk of failure during a commercial power outage or systems failure.

In the design/construction phase, data center operators follow a system of:

  • Determining the scope of the project
  • Developing a data center design specification based on both company and industry standards
  • Designing a specific facility based on business scope and budget, which will comply with the standard design specification
  • Publishing the design specification and distributing it to several candidate construction management and engineering companies
  • Using a strong project manager to drive the construction, permitting, certification, and vendor management process
  • Completing systems integration and commissioning prior to actual operations

Of all the above tasks, a complete commissioning plan and integration test is essential to building confidence that the data center or telecom facility will operate as planned. Many outages in the past have resulted from systems that were not fully tested or integrated prior to operations.

An example is the breaker coordination study. This is the process of ensuring that switchgear and panel breakers, from the point of electrical presentation by the local power utility down to individual breaker panels, are set, tested, and integrated according to vendor specification. Without a complete coordination study, there is no assurance that components within an electrical system will operate correctly during normal conditions or during equipment failures. It is an essential component of a complete systems integration test, and failure to complete a simple breaker coordination study during commissioning has resulted in major electrical failures in data centers as recently as 2008.

The InterNational Electrical Testing Association (NETA) provides guidance on electrical commissioning for data centers under “full design load” conditions. This includes recommendations for testing performance and operations, including the sequence of operations for electrical, mechanical, building management (BMS), and power monitoring/management systems. The actual levels of NETA testing are:

  • Level 1 – Submittal Review and Factory Testing
  • Level 2 – Site Inspection and Verification to Submittal
  • Level 3 – Installation Inspections and Verifications to Design Drawings
  • Level 4 – Component Testing to Design Loads
  • Level 5 – System Integration Tests at Full Design Loads

No company should consider collocation within a facility that cannot produce complete documentation that integration testing and commissioning were completed prior to facility operations – and that testing should be at NETA Level 5. In some cases, documentation of “retro” testing is acceptable; however, potential tenants in a facility should be aware that this is still a compromise, as it is almost impossible to complete a retro-commissioning test in a live facility.

Bottom Line – even a multi-million-dollar facility has no integrity without a detailed design specification and a complete integration/commissioning test.

The Human Factor in Continuing Facility Operations

Assuming the facility adequately completes integration and commissioning at NETA Level 5, the next step is ensuring it has a comprehensive continuing operations plan to manage its electrical and mechanical/air conditioning systems. There are two main recommendations for ensuring that the annual, monthly, and even daily equipment maintenance and inspection plans are being completed.

Computerized Maintenance Management System (CMMS)

Data centers and central offices are complex operations, with thousands of moving parts and thousands of things that can potentially break or go wrong. A CMMS tries to bring all those components together into an integrated resource that includes (according to Wikipedia):

  • Work orders: Scheduling jobs, assigning personnel, reserving materials, recording costs, and tracking relevant information such as the cause of the problem (if any), downtime involved (if any), and recommendations for future action
  • Preventive maintenance (PM): Keeping track of PM inspections and jobs, including step-by-step instructions or check-lists, lists of materials required, and other pertinent details. Typically, the CMMS schedules PM jobs automatically based on schedules and/or meter readings. Different software packages use different techniques for reporting when a job should be performed.
  • Asset management: Recording data about equipment and property including specifications, warranty information, service contracts, spare parts, purchase date, expected lifetime, and anything else that might be of help to management or maintenance workers. The CMMS may also generate Asset Management metrics such as the Facility Condition Index, or FCI.
  • Inventory control: Management of spare parts, tools, and other materials including the reservation of materials for particular jobs, recording where materials are stored, determining when more materials should be purchased, tracking shipment receipts, and taking inventory.
  • Safety: Management of permits and other documentation required for the processing of safety requirements. These safety requirements can include lockout-tagout, confined space, foreign material exclusion (FME), electrical safety, and others.

We can also add steps such as daily equipment inspections, facility walkthroughs, and staff training.
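To make the CMMS concept a little more concrete, below is a minimal, hypothetical sketch in Python of how a preventive maintenance record might be modeled. The class names, fields, and the example generator task are invented for illustration and do not represent any particular CMMS product; the only behavior shown is the scheduling rule described above, where a PM job comes due on either a calendar interval or a meter reading.

```python
# Illustrative-only sketch of CMMS-style preventive maintenance scheduling.
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import List

@dataclass
class Asset:
    """A piece of facility equipment tracked by the CMMS (e.g., a generator or CRAC unit)."""
    asset_id: str
    description: str
    warranty_expires: date
    runtime_hours: float = 0.0                 # meter reading

@dataclass
class PMTask:
    """A preventive maintenance task triggered by calendar interval or meter reading."""
    asset: Asset
    checklist: List[str]
    interval_days: int                         # calendar-based trigger
    interval_hours: float                      # meter-based trigger
    last_done: date = field(default_factory=date.today)
    hours_at_last_service: float = 0.0

    def is_due(self, today: date) -> bool:
        calendar_due = (today - self.last_done) >= timedelta(days=self.interval_days)
        meter_due = (self.asset.runtime_hours - self.hours_at_last_service) >= self.interval_hours
        return calendar_due or meter_due

# Example: inspect a standby generator every 90 days or every 250 run-hours.
gen = Asset("GEN-01", "Standby diesel generator", date(2012, 6, 30), runtime_hours=260.0)
pm = PMTask(gen, ["check fuel and oil", "load-bank test", "inspect transfer switch"],
            interval_days=90, interval_hours=250.0)
print("PM due today?", pm.is_due(date.today()))
```

A real CMMS adds work orders, inventory, safety permits, and reporting on top of this, but even a toy model shows why a disciplined, automated schedule beats relying on staff memory.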

SAS 70 Audits

The SAS 70 audit is becoming a more popular tool for companies to force the data center operator to provide documentation, audited by a neutral evaluator, showing that the operator is actually completing the maintenance, security, staffing, and permitting activities stated in marketing materials and sales negotiations.

Wikipedia defines a SAS 70 audit as:

“… the professional standards used by a service auditor to assess the internal controls of a service organization and issue a service auditor’s report. Service organizations are typically entities that provide outsourcing services that impact the control environment of their customers. Examples of service organizations are insurance and medical claims processors, trust companies, hosted data centers, application service providers (ASPs), managed security providers, credit processing organizations and clearinghouses.

There are two types of service auditor reports. A Type I service auditor’s report includes the service auditor’s opinion on the fairness of the presentation of the service organization’s description of controls that had been placed in operation and the suitability of the design of the controls to achieve the specified control objectives. A Type II service auditor’s report includes the information contained in a Type I service auditor’s report and also includes the service auditor’s opinion on whether the specific controls were operating effectively during the period under review.”

Many companies in the financial services industries now consider a SAS 70 audit essential when evaluating candidate data center facilities to host their data and applications. Startup companies with savvy investors are demanding SAS 70 audits. In fact, any company considering outsourcing its data or applications into a commercial data center should demand to obtain and review the SAS 70 audit for each facility under consideration.

Otherwise, you are forced to rely on a marketer’s spin, a salesman’s desperate pitch, or the word of others for confidence that your business will be protected in another company’s facility.

One thing to keep in mind about SAS 70 audits… The audit only reviews items the data center operator chooses to audit. Thus, a company may have very nice and polished SAS 70 audit documentation, yet the contents may not include every item you need to confirm the data center operator has a comprehensive operations plan. Consider finding an experienced consultant to review the SAS 70 document and provide additional guidance on whether the audit actually covers all the facility maintenance and management items needed to ensure continuing protection from mechanical, monitoring/management, electrical, security, or human staffing failures.

Finally, Know Your Facility

Facility operators are traditionally reluctant to show a potential customer or tenant the electrical and mechanical diagrams and “as-built” documentation for the facility. This is where you would find a 40-year-old aluminum bus duct, single points of failure, and other infrastructure designs and realities you should know about before putting your business into a data center or carrier hotel.

So, when all other data center and carrier hotel facilities appear equal in geography and interconnections, look at the facilities which will incur the least impact if your interconnections are disrupted, and demand that your candidate data center operator and hosting provider provide complete documentation on the facility, its commissioning, its CMMS, and its SAS 70 audits.

Your business, the global marketplace, and the network-connected world depend on demanding the highest possible standards of facility design and operation.

John Savageau, Long Beach

Other articles in this series include:
