
December 11, 2017

Hyperconvergence Only Solves Half the Problem // Xperts Tips Series

Enterprises and service providers constantly worry about lost time, lost money, and unscheduled downtime, and they are always looking for new or better ways to reconcile the conflicting demands of higher performance and lower cost.

 

When problems do occur (and they always do), system administrators need to avoid long, drawn-out processes of fault identification and root cause analysis. To increase performance, IT pros need agile solutions that can redirect applications and reallocate storage and computing resources to ensure a reliable, scalable platform. Furthermore, to meet or exceed service level agreements (SLAs), IT experts need to deliver real-time reports and trend analysis that enable the IT team to be proactive, identify maintenance requirements, and document the business impact.

 

A Closer Look

Take, for example, an IT project we recently worked on with an international manufacturer of defense systems, whose equipment often integrates with high-performance computing platforms. As part of a client contract, their team had been asked to develop and install a hyperconverged infrastructure (HCI): a pre-loaded, pre-configured server rack consisting of hardware and software for compute, storage, networking, virtualization, containers, data streaming components, and a range of other new technologies. A crucial part of the project was having tools that would ensure consistent performance and reliability over time.

 

The vendor needed a way to evaluate the entire HCI system (all software applications, networking, and infrastructure components) that would recognize problems in real-time, isolate root causes, identify trends and foresee impending issues before they caused performance degradation and downtime. Furthermore, they needed a solution that displayed the “health status” for each business process so that system administrators could recognize the operational implications of various problems, and assign the right resources to solve the most critical issues.

 

 

How Do You Monitor the Entire IT System and Associate Each Layer of Technology with Individual Business Processes?

 

Traditional infrastructure and operations (I&O) tools alone lack comprehensive coverage of these various IT domains – applications, Big Data, operating systems, databases, storage, compute, security, networking, etc. So, while HCI architectures accelerate implementation by simplifying storage and hardware decisions, an HCI approach by itself lacks the required performance monitoring and analytics for the software layers, where 70-80% of performance problems occur.

 

Without an end-to-end view of the entire IT system, and an understanding of how different technology layers affect one another, administrators must scramble to trace how problems propagate (cause and effect). This leads to poor first-time-fix (FTF) rates, more no-trouble-found (NTF) events, and longer mean time to repair (MTTR).

 

With the growing complexity of IT environments, especially hybrid computing models (combining private Cloud, public Cloud, and on-premises systems), the need to solve the enterprise-level monitoring and analytics problem has become even greater.

 

 

What Can Be Done to Optimize Complex IT Environments?

 

The only way IT experts can manage technology complexity, reduce costs, and achieve higher performance is through a unified IT performance monitoring and analytics approach.

 

Unified IT performance monitoring delivers a consolidated view of overall service levels by evaluating all layers of the technology stack, including applications, Big Data, operating systems, databases, storage, compute, security, networking, Cloud, Edge, and IoT/IIoT devices.

 

Unified IT performance analytics provides reports that reveal dependencies, correlations, and trends, surfacing operational issues before they occur. This approach essentially provides an early warning system for problems, along with corrective action tools that quickly isolate defects and identify root causes. As a result, organizations can capture a real-time picture of the complete IT stack and ensure that service levels are consistent with contractual agreements.
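To make the "health status per business process" idea concrete, here is a minimal, hypothetical sketch: each business process depends on KPIs drawn from several technology layers, and the process's health is the worst status among its KPIs. The metric names, thresholds, and rollup rule here are illustrative assumptions, not Centerity's actual API or data model.

```python
# Illustrative sketch only: rolling up per-layer KPIs into one health
# status per business process. Status codes mirror the common
# OK / WARNING / CRITICAL monitoring convention.

OK, WARNING, CRITICAL = 0, 1, 2
STATUS_NAMES = {OK: "OK", WARNING: "WARNING", CRITICAL: "CRITICAL"}

def kpi_status(value, warn_at, crit_at):
    """Classify a single metric reading against its thresholds."""
    if value >= crit_at:
        return CRITICAL
    if value >= warn_at:
        return WARNING
    return OK

def process_health(kpis):
    """A business process is only as healthy as its worst KPI."""
    return max(kpi_status(value, warn, crit)
               for value, warn, crit in kpis.values())

# Hypothetical "order processing" business process spanning three layers,
# as (current value, warning threshold, critical threshold):
order_processing = {
    "app_response_ms":    (350, 300, 500),  # application layer
    "db_cpu_percent":     (72, 80, 95),     # database layer
    "storage_latency_ms": (4, 10, 25),      # storage layer
}

print(STATUS_NAMES[process_health(order_processing)])  # prints "WARNING"
```

In this toy example the application response time exceeds its warning threshold, so the whole business process is flagged WARNING even though the database and storage layers are healthy: the early warning fires before any single layer becomes critical.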

 

Keeping this in mind, let's go back to our customer example above. Our defense systems manufacturer's IT operations team had three potential paths worth considering.

 

 

Three Potential Paths to Victory

 

The first path was a combination of built-in monitoring tools (often packaged with infrastructure devices and software applications) and third-party commercial performance monitoring tools, together with a layer of custom code written on top to integrate the pieces, implement business rules, and provide the required high-level dashboard.

 

A “combo pack” solution like this, however, would be very time consuming, risky, and expensive for the customer. In addition, it would require an ongoing consulting agreement with the vendor to ensure the custom code continued to perform as expected.

 

The second path was to implement an integrated monitoring suite from one of the large software vendors. A ‘suite solution’ looked good at first but ultimately proved too expensive and limiting, because each IT layer would require purchasing and managing a separate software module. All too often, critical monitoring features would not be available until the “next” release.

 

The IT team’s third option was the unified IT and operations performance monitoring and analytics solution from Centerity.

 

 

Be on the Right Path

 

Centerity was considered cost-effective because a single platform covered all technology layers (hardware and software), and pricing was based not on separate modules but on the number of metrics (KPIs) that needed to be monitored.

 

The technology evaluations also revealed that Centerity’s risk profile was low because the system was up and running within a matter of hours, including monitoring and root cause analysis. Indeed, by the end of a week-long test the entire HCI system was being optimized based on Centerity’s analytics and trend reports.

 

 

After implementing Centerity’s end-to-end IT performance monitoring and analytics platform, our client and their customer finally had real-time access to critical information that allowed them to manage and improve service levels and customer satisfaction.

 

Since implementing Centerity, reports show that the fault isolation and root cause analysis capabilities have reduced MTTR on the HCI system by as much as 80% compared with previous systems.

 

Centerity’s unified software appliance was the only solution that could cover the entire IT stack on the hyperconverged system and give our customer the performance and reliability they expected.

 

For further information or questions about improving IT performance through a unified IT performance analytics solution, please schedule a demo to see how Centerity can help with your IT needs.

Author
__________________________________________________________________________________________________________________

 

Diti Clayton is an Alliance Manager at Centerity, focusing on engaging with IT experts and decision makers, channel partners, and technology vendors to bring incredible joint solutions to new customers.

 

 

LinkedIn / Contact

 

 

 

About Centerity

Centerity is a chosen vendor for leading complex hybrid IT platforms such as VCE Vblock, VCE VxRail, Smartstack, Nutanix, and Flexpod. Centerity’s award-winning software provides a unified, enterprise-class IT performance analytics platform that improves the performance and reliability of business services and ensures the availability of critical systems. By delivering a consolidated view across all layers of the technology stack, including applications, Big Data, operating systems, databases, storage, compute, security, networking, Cloud, Edge, and IoT/IIoT devices, Centerity provides early warning of performance issues along with corrective action tools to quickly isolate faults and identify root causes.