Guest Post by Tyler Constable

Your data is only as safe as your data center. Your IT landscape runs on one or more servers somewhere, and those servers can be damaged by almost anything: a break-in, a natural disaster, even a static shock caused by excessively dry air. Unfortunately, many companies that are scrupulous about logical security still neglect physical security. Here's what's required to secure a data center footprint, and why so many organizations get it wrong.

Data Center Monitoring and Performance Analytics Require Precise Control

Data center environmental monitoring should keep relative humidity between 45% and 60%. If the air becomes too humid, water can condense on cooling systems or near the floor, potentially damaging servers and other equipment. If the air is too dry, on the other hand, static charge can build up, discharge, and fry electronics.
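The 45%–60% band above can be expressed as a simple threshold check. This is a hedged sketch, not any particular monitoring product's API; the function name and status strings are illustrative assumptions.

```python
# Hypothetical sketch: classify a relative-humidity reading against
# the 45%-60% safe band discussed above.

SAFE_LOW, SAFE_HIGH = 45.0, 60.0  # percent relative humidity

def humidity_status(reading: float) -> str:
    """Return a status string for a relative-humidity reading."""
    if reading < SAFE_LOW:
        return "too dry: static discharge risk"
    if reading > SAFE_HIGH:
        return "too humid: condensation risk"
    return "ok"

print(humidity_status(42.0))  # too dry: static discharge risk
print(humidity_status(55.0))  # ok
print(humidity_status(63.5))  # too humid: condensation risk
```

In a real deployment this check would feed an alerting system rather than print to a console, but the thresholds are the same.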

Other environmental factors, like heat and airflow, also need to be carefully controlled. To do that, you need 24/7 supervision, along with redundant data center environmental monitoring equipment, so there’s always a backup in place when a thermometer or humidity gauge fails.

Data centers also need backups for core systems, like networking and fire suppression. That way, if something fails (or returns a sensor reading that indicates it may be about to fail) the redundant system can pick up the slack. The goal is to be able to keep the IT landscape up and running with little to no disruption, no matter what goes wrong.
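Sensor redundancy can be made concrete with a small sketch: poll several gauges for the same value, drop any that have failed, and consolidate the rest. The median is used here so one drifting or dead gauge can't distort the result; the function and data shape are assumptions for illustration.

```python
# Hypothetical sketch: consolidate redundant sensor readings so a
# single failed thermometer or humidity gauge doesn't blind monitoring.

from statistics import median
from typing import Optional

def consolidated_reading(readings: list[Optional[float]]) -> Optional[float]:
    """Combine redundant readings, ignoring failed (None) sensors."""
    live = [r for r in readings if r is not None]
    if not live:
        return None  # every sensor failed: page an operator
    return median(live)  # median resists a single drifting gauge

# One of three thermometers has failed (None); the other two still report.
print(consolidated_reading([22.1, None, 22.4]))  # 22.25
```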


Poor Data Center Design Undermines Monitoring

Although industry standards account for how data centers should operate, they tend to overlook flaws in the facilities themselves. Too many data centers started their lives as warehouses or office buildings.

This poses additional risks for environmental monitoring and raises the operational cost of environmental control: if the building is poorly sealed and insulated, it's harder to control humidity and temperature. Cracks can let unpredictable bursts of humid air into the building, creating spots where condensation, or even leaks, can form.

Converted buildings are also difficult and expensive to protect against disasters. They may be built in areas vulnerable to earthquakes or fires, using outdated construction methods. They also tend to be less secure; often, features like hollow-core walls, false ceilings, and multiple entry and exit points make it much harder to prevent unauthorized entrance. Properly installing internal access control, server cages and other security features can be prohibitively expensive, and many providers cut corners.

Businesses Need Better Monitoring and Security

In the last decade, businesses have gone from poorly implemented tape backups to disaster recovery planned carefully around recovery time and recovery point objectives (RTO and RPO). Customers have learned why good DR is important, and how to ask the right questions.

Data center monitoring and security need to go through the same evolution. Enterprises need to familiarize themselves with existing standards like TIA-942 and the Uptime Institute tiers, and with the importance of external SSAE 16 compliance auditing.

While many companies have some sort of monitoring in place, it's usually at the host or application level. Physical conditions such as heat, intrusion, and moisture are often tracked by a separate set of monitors. A true solution has all monitoring components reporting back to one centralized monitoring application; any departure from this strategy can lead to confusion, or even to missed critical checks.
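The "one centralized application" idea can be sketched as a single registry that every check, whether host-level or physical, reports into, so nothing lives on a side channel. The registry, decorator, and check names below are illustrative assumptions, not a real monitoring product.

```python
# Hypothetical sketch: one central registry for checks from every
# layer (host, application, physical), evaluated in a single pass.

from typing import Callable

CHECKS: dict[str, Callable[[], bool]] = {}

def register(name: str):
    """Decorator that adds a check to the central registry."""
    def wrap(fn: Callable[[], bool]) -> Callable[[], bool]:
        CHECKS[name] = fn
        return fn
    return wrap

# Illustrative checks; real ones would query agents or sensors.
@register("host:cpu")
def cpu_ok() -> bool:
    return True

@register("physical:humidity")
def humidity_ok() -> bool:
    return False  # simulated out-of-band humidity reading

def run_all() -> list[str]:
    """Return the names of every failing check, across all layers."""
    return [name for name, fn in CHECKS.items() if not fn()]

print(run_all())  # ['physical:humidity']
```

Because every check funnels through one `run_all()` pass, a failing physical sensor surfaces in the same place as a failing host check, rather than on a forgotten secondary console.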

The auditing standards need to incorporate deeper physical security and safety assessment. If a building can withstand a hurricane or survive an armed attack, that data should be available to customers; likewise if a data center sits in the path of a likely major disaster. Although not every company needs the same level of protection, each needs a realistic assessment of the risks it faces, something much of the hosting industry still doesn't provide.