Desktop Virtualization

Top Tips: Best practices for a high performance VDI deployment

S. “Sundi” Sundaresh is President and CEO of Xangati. A well-known technology executive with a proven track record, Sundaresh has served as CEO, general manager, board member (public and private) and senior executive at top companies, where he has delivered profitable business growth, completed mergers, acquisitions and divestitures, and raised capital. He has run both well-established organizations and start-ups, with revenues up to $1 billion.

Sundi shares his top tips for ensuring a high performance virtual desktop infrastructure deployment.

A benefit of VDI is that user data can be corralled on centralized servers rather than distributed across thousands of disparate – and frequently unsecured – physical desktop computers. While the cost and productivity benefits of VDI are obvious, organizations often cite concerns about performance and user acceptance/experience as their primary issues. That’s not surprising given the significant challenges involved: VDI generates vastly different traffic patterns than traditional physical desktops, applications make unpredictable demands, and large deployments risk “contention storms” that can cause serious performance problems.

So how do you ensure a successful VDI deployment? Here are my top five recommendations based on thousands of customer transactions and best practices derived from Xangati’s real-time performance optimization database.

Visualize in real time with high fidelity - A high performance management system needs to collect metrics from the entire VDI infrastructure on a real-time, second-by-second basis so resource contention and user performance issues can be quickly identified and resolved. Continuous monitoring enables real-time responsiveness where alerts can be generated instantaneously to identify the location and origin of an issue, and even to provide root cause analysis, predictions and recommendations for remediation.
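The second-by-second collection and instantaneous alerting described above can be sketched as a simple polling loop. This is an illustrative outline only – the function and metric names (`collect_metrics`, `cpu_ready_pct`, `disk_latency_ms`) and the thresholds are assumptions for the example, not a real product API.

```python
import time

# Illustrative thresholds for two common VDI contention signals
# (values here are examples, not vendor recommendations).
THRESHOLDS = {"cpu_ready_pct": 5.0, "disk_latency_ms": 20.0}

def collect_metrics(host):
    """Stub for a per-second metrics pull from one host."""
    return {"cpu_ready_pct": 3.2, "disk_latency_ms": 25.0}

def check(host, metrics, thresholds=THRESHOLDS):
    """Return one alert per metric that crosses its threshold."""
    return [
        f"ALERT {host}: {name}={value} exceeds {thresholds[name]}"
        for name, value in metrics.items()
        if name in thresholds and value > thresholds[name]
    ]

def monitor(hosts, interval=1.0, cycles=1):
    """Poll every host once per interval and collect any alerts."""
    alerts = []
    for _ in range(cycles):
        for host in hosts:
            alerts.extend(check(host, collect_metrics(host)))
        time.sleep(interval)
    return alerts
```

The key point of the tip is the one-second granularity: a five-minute averaging window would smooth away exactly the short contention spikes this loop is meant to catch.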

Understand end-to-end application behavior - Second-by-second metrics collected from servers, switches and storage systems, correlated in real time with application health attributes, provide an excellent foundation for the higher-level functions delivered by advanced analytics. Now you can dissect a problem both within the datacenter and out to client devices, for a true end-to-end perspective. Add information about users, the desktops they are using, and the groups and servers those desktops are currently running on, and the picture becomes crystal clear – so clear that potential problems can be anticipated, identified and remediated in record time.
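The end-to-end correlation described here amounts to joining topology with metrics: walking from a user to their desktop to the host it runs on, and attaching each layer's measurements. A minimal sketch, with all dictionaries being hypothetical sample data rather than a real API:

```python
def trace_user(user, sessions, placements, host_metrics):
    """Walk user -> desktop -> host and attach that host's latest metrics,
    giving a single end-to-end view for one user's complaint."""
    desktop = sessions.get(user)        # which desktop the user is logged into
    host = placements.get(desktop)      # which server that desktop runs on
    return {
        "user": user,
        "desktop": desktop,
        "host": host,
        "host_metrics": host_metrics.get(host, {}),
    }
```

In practice the same join would extend through the switch ports and datastores on the path, but the principle is the same: every layer's metrics keyed back to the user who experiences the result.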

Combine best-practice standards with self-learned alerting - There may be one contention event in an hour, or five, or even fifty. So how do you make sure each one is caught? The answer is live, continuous alerting. Continuously monitoring activity allows appropriate alerts to be generated, and a DVR-like recording can be captured when thresholds are crossed. Advanced analytics should not only look at data in real time, but also cross-reference it with your own history, as well as with experience gathered in similar environments.
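One common way to implement the "self-learned" half of this tip is to alert when a metric deviates sharply from its own rolling baseline, and to snapshot a ring buffer of recent samples – the DVR-like recording – at that moment. The sketch below is illustrative only; the class name, window size and sigma threshold are assumptions for the example:

```python
import statistics
from collections import deque

class SelfLearnedAlerter:
    def __init__(self, window=300, sigmas=3.0):
        self.history = deque(maxlen=window)    # rolling baseline the metric "learns"
        self.recording = deque(maxlen=window)  # DVR-like buffer of recent samples
        self.sigmas = sigmas

    def observe(self, value):
        """Record one sample; return an alert dict if it deviates from baseline."""
        self.recording.append(value)
        alert = None
        if len(self.history) >= 30:  # require some history before alerting
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history)
            if stdev and abs(value - mean) > self.sigmas * stdev:
                # threshold crossed: capture the recording for root-cause review
                alert = {"value": value, "baseline": mean,
                         "capture": list(self.recording)}
        self.history.append(value)
        return alert
```

The best-practice standards the tip mentions would sit alongside this as fixed thresholds (as in the earlier polling example); the two approaches catch different failure modes, which is why the tip says to combine them.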

Understand cross-silo interactions to predict resource contentions - Cross-silo interactions can cause intolerable delays to the end-user experience, even for users who are outside of the silo in which an apparent glitch has been discovered. But with an application-aware understanding of the interactions between IT infrastructure components in the various silos, you can predict and prevent issues by pinpointing what specifically is causing them, such as contention for a single resource by several devices or applications.
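The core of cross-silo contention detection is grouping concurrent demand from workloads in different silos onto the shared resource they all touch, then flagging any resource where combined demand exceeds capacity. A hypothetical sketch (resource names and the capacity model are assumptions for illustration):

```python
from collections import defaultdict

def find_contention(demands, capacities):
    """demands: list of (vm, resource, load); capacities: resource -> limit.
    Returns {resource: [vms]} for each shared resource that is oversubscribed
    by more than one consumer."""
    by_resource = defaultdict(list)
    for vm, resource, load in demands:
        by_resource[resource].append((vm, load))
    contended = {}
    for resource, users in by_resource.items():
        total = sum(load for _, load in users)
        if len(users) > 1 and total > capacities.get(resource, float("inf")):
            contended[resource] = [vm for vm, _ in users]
    return contended
```

This is why the tip stresses crossing silos: the desktop team sees only its VMs and the storage team only its arrays, but the contention exists in the sum.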

Prescriptive analytics deliver business outcomes - As private cloud VDIs are constructed, events like virtual desktop boot storms and virus storms, vMotion migrations and rich-media I/O all need to be serviced while ensuring consistent, predictable performance for other users sharing the virtual infrastructure. VM storm alerts can be overwhelming for the IT administrator unless he or she has real-time visibility into the dependencies of all components of the IT infrastructure. Leveraging a real-time analytics engine that shows the VI admin the root cause of the storm means storms can be tracked from when they first arise, through the stages where they are gaining momentum, and up to the point where preventive action has to be taken. Prescriptive steps are provided so the IT administrator can deliver predictable business outcomes, such as guaranteed low latency, and maintain service-level assurances.
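Tracking a storm through its stages, as described above, can be thought of as classifying a rate signal – for a boot storm, the number of desktop power-on events per second – into escalating states. A deliberately minimal sketch; the stage names and rate thresholds are illustrative assumptions, not measured figures:

```python
def storm_stage(boot_rate_per_sec):
    """Classify a boot storm's stage from desktop power-ons per second,
    so action can be taken before shared storage and CPU saturate."""
    if boot_rate_per_sec < 2:
        return "normal"
    if boot_rate_per_sec < 10:
        return "emerging"
    if boot_rate_per_sec < 25:
        return "gaining momentum"
    return "act now"
```

A prescriptive system would attach a recommended action to each stage – for example, staggering power-ons once the "gaining momentum" stage is reached – rather than merely raising an alert.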

Taking these five practices into consideration when deploying your VDI can make the process smoother and less stressful. VDI is a great organizational asset, allowing businesses to minimize risk, enhance performance and greatly improve the end user experience. However, no new VDI deployment is without risk if predictive performance is not a central component. In fact, an infrastructure performance intelligence solution should be engaged as early as possible in the process, not after the fact.

To address the five best practices outlined here, find a solution that provides app-aware, continuous, scalable performance intelligence – and doesn’t just tell you that a problem exists, but tells you how to fix it before it seriously impacts the end user.
