Sizing Up Hyperconverged Intelligence

March 4, 2019

The hyperconverged infrastructure (HCI) market already exceeds $5 billion annually, and infrastructure and operations leaders focused on modernization and agility are beginning to prioritize vendors that build intelligence into their HCI solutions. One example can be found in the area of quality of service (QoS). But these leaders should base their HCI evaluations on solutions that go above and beyond QoS to meet their growing need for automation and intelligence.

Go Beyond QoS Basics

Going beyond simple QoS, which is based on performance caps and bursting, is driven by the need to consolidate more applications on HCI with reduced complexity. This can be accomplished with a policy-based approach to QoS: one that uses artificial intelligence and machine learning to make real-time decisions, orchestrating and automating data placement, data path management and workload prioritization to achieve the application workload SLA selected by the system admin.

This level of automation means the system can make real-time decisions based on its understanding of your business priorities, resulting in set-and-forget application SLA management. All system admins have to do is apply one of five pre-defined performance policies to a storage volume, datastore or VM and the HCI system auto-magically takes it from there.  
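The set-and-forget model described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the class, the policy names, and the `set_policy` method are invented for this sketch and are not Pivot3's actual API; the source only tells us that admins pick one of five pre-defined performance tiers per volume, datastore or VM.

```python
# Hypothetical sketch of policy-based QoS assignment. The five tier
# names below are invented for illustration; the source only says
# there are five pre-defined performance policies.
POLICIES = ("mission-critical", "business-critical", "high",
            "medium", "low")

class Volume:
    """A storage volume (or datastore/VM) carrying one QoS policy."""

    def __init__(self, name, policy="medium"):
        if policy not in POLICIES:
            raise ValueError(f"unknown policy: {policy}")
        self.name = name
        self.policy = policy

    def set_policy(self, policy):
        # Set-and-forget: the admin picks a tier once, and the system
        # is then responsible for meeting that SLA in real time.
        if policy not in POLICIES:
            raise ValueError(f"unknown policy: {policy}")
        self.policy = policy

# The admin's entire job under this model: tag the volume once.
vol = Volume("sql-data", policy="mission-critical")
```

The point of the sketch is the narrow surface area: the admin expresses business priority once, and all the real-time decisions (placement, pathing, prioritization) happen behind that single assignment.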

This was the design objective behind performance QoS in Pivot3’s Intelligence Engine.  

It’s About Performance (And Security. And Protection.)

So, what should you be looking for in policy-based QoS? For starters, take a closer look at each vendor's QoS. A main area of focus is performance. While storage is often blamed for application performance problems, it isn't IOPS that kills applications; it's latency. Any QoS implementation must be able to manage latency consistently. Certain vendors address this by managing network latency, but network pipes are now so big (25GbE, 40GbE, 100GbE and beyond) that network latency is rarely the bottleneck, so it comes down to the slowest component: storage.

Addressing storage latency by applying a simple policy does two things: it fixes application performance problems and it eliminates complexity. In the process, a policy-based approach gives customers the confidence to consolidate multiple mixed application workloads on HCI, and the peace of mind that their most important applications are guaranteed to perform as needed.

But it’s not just about performance. Policy-based management can be applied to data protection and security to simplify those tasks as well. A pre-defined or user-built library of policies can define snapshot schedules and retention periods based on business needs, simplifying the tedious task of ensuring proper data protection.
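A protection-policy library like the one just described can be sketched as a small lookup table. Again, this is a hypothetical illustration: the tier names ("gold"/"silver"/"bronze"), the fields, and the numbers are assumptions for the sketch, not any vendor's actual schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of a pre-defined protection-policy library.
# All names and values here are illustrative assumptions.

@dataclass(frozen=True)
class ProtectionPolicy:
    name: str
    snapshot_interval_hours: int  # how often a snapshot is taken
    retention_days: int           # how long each snapshot is kept

# A small pre-defined library; an admin could extend it with
# user-built policies matching specific business needs.
LIBRARY = {
    "gold":   ProtectionPolicy("gold",   snapshot_interval_hours=1,  retention_days=30),
    "silver": ProtectionPolicy("silver", snapshot_interval_hours=4,  retention_days=14),
    "bronze": ProtectionPolicy("bronze", snapshot_interval_hours=24, retention_days=7),
}

def snapshots_retained(policy: ProtectionPolicy) -> int:
    """Rough steady-state snapshot count implied by a policy."""
    return policy.retention_days * 24 // policy.snapshot_interval_hours

# e.g. the "gold" tier above implies 30 * 24 = 720 retained snapshots
```

Expressing schedules and retention as named policies rather than per-volume settings is what turns data protection from a recurring chore into a one-time assignment.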

Data-at-rest encryption with automated key management is also accomplished via simple policy assignments, ensuring data security and compliance, reducing vulnerabilities, and adhering to increasingly strict regulations such as GDPR, HIPAA, CJIS and FedRAMP, to name a few. A report from 451 Research sums this up nicely: “For security, the use of automation to eliminate or reduce human error is even more valuable than in other areas. Automated, policy-based security tools should be considered essential in any HCI platform, covering functions such as data encryption, key management and access control.”

Download the 451 Research Report Here.

Want to see where some popular HCI solutions diverge on QoS? Take a look at the article by DCIG or contact an HCI expert now.
