Top Tips: Ensuring a successful VDI deployment

Five practices to take into consideration when deploying your VDI

Sundi Sundaresh is CEO of Xangati, an infrastructure performance management software innovator. Before becoming an advisor to technology companies and a board member of GridGain and SandForce, Sundi was the president and CEO of Adaptec, Inc. Prior to that, he was president and CEO of Candera Inc., an enterprise storage virtualization start-up.

Sundi shares his top tips for ensuring a successful VDI deployment.

A core benefit of VDI is that user data is consolidated on centralized servers rather than distributed across thousands of disparate – and frequently unsecured – physical desktop computers. While the cost and productivity benefits of VDI are clear, adopters often cite concerns about VDI performance and user acceptance/experience as their primary reservations. That is not surprising given the significant challenges involved: VDI creates vastly different traffic patterns from those of traditional physical desktops, applications place unpredictable demands on shared infrastructure, and large deployments risk "contention storms" that can cripple performance.

Every virtual desktop deployment offers an opportunity to enlist specific features that provide more predictive analysis and prevent problems before they impact the end user. The five practices below explore capabilities that can make a real difference in meeting – and exceeding – Quality of Experience (QoE) expectations.

Real-time, live data visualization - As an IT administrator, you should use frictionless mechanisms to collect metrics from the entire VDI infrastructure on a real-time, second-by-second basis so resource contention and user performance issues can be quickly identified and resolved. By using collectors that do not require agents or probes, you can perform continuous monitoring to enable real-time responsiveness. Alerts can now be generated instantaneously to identify the location and origin of an issue, and even to provide root cause analysis and recommendations for remediation.
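To make the idea concrete, here is a minimal sketch – not any vendor's implementation – of how a per-second collector might turn a sliding window of readings into an instantaneous alert. The metric (host CPU-ready %), the threshold and the sample values are illustrative assumptions:

```python
import statistics
from collections import deque

def check_contention(samples, threshold):
    """Return an alert dict if the latest per-second sample breaches the
    threshold; the alert carries context to help locate the origin."""
    latest = samples[-1]
    if latest > threshold:
        return {
            "metric_value": latest,
            "threshold": threshold,
            "window_avg": statistics.mean(samples),
        }
    return None

# Simulated second-by-second CPU-ready readings from one VDI host.
window = deque(maxlen=10)  # short sliding window of recent readings
alerts = []
for reading in [3.1, 2.8, 3.4, 14.7, 3.0]:  # 14.7 is a contention spike
    window.append(reading)
    alert = check_contention(list(window), threshold=10.0)
    if alert:
        alerts.append(alert)

print(alerts)  # one alert, raised the second the spike occurred
```

Because the check runs on every sample as it arrives, the alert fires within the same second as the spike rather than minutes later in a batch report.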

Understand end-to-end application health and QoE impacts - Frictionless collectors let you gather second-by-second metrics from servers, switches and storage systems, correlated in real time with application health attributes. You can then triage a problem from within the datacenter out to client devices for a true end-to-end perspective. Add information about user behavior, the virtual apps in use and the shared components those desktops are currently running on, and the picture becomes crystal clear, down to granular data such as individual login and reconnect metrics.
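The correlation step can be sketched very simply: join per-second metric streams from different tiers on their timestamps, so that one moment in the delivery chain can be inspected end to end. The tier names and sample values below are hypothetical:

```python
def correlate_by_second(*streams):
    """Join per-second metric streams (dicts keyed by timestamp) on
    timestamp, keeping only seconds for which every tier reported."""
    common = set(streams[0])
    for s in streams[1:]:
        common &= set(s)
    return {t: tuple(s[t] for s in streams) for t in sorted(common)}

# Hypothetical per-second samples: host CPU %, storage latency ms, login time s.
host    = {100: 62, 101: 64, 102: 91}
storage = {100: 4,  101: 5,  102: 27}
logins  = {101: 11, 102: 34}

print(correlate_by_second(host, storage, logins))
# -> {101: (64, 5, 11), 102: (91, 27, 34)}
```

At timestamp 102 the slow login (34 s) lines up with both a CPU spike and high storage latency, which is exactly the cross-tier view that makes triage possible.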

Combine best-practice standards with self-learned alerting - There may be one contention event within an hour, or there may be five, or even 50. So how do you make sure each one is caught? The answer is live, continuous alerting from a data collector that relies primarily on APIs and protocols rather than agents. Continuous monitoring allows appropriate alerts to be generated, along with DVR-like recordings that capture the moment a threshold is crossed. Advanced analytics should not only look at data in real time, but also cross-reference it against your own historical trends.
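One common way to express "self-learned" alerting – offered here as an illustrative sketch, not the article's specific product – is to derive the threshold from the deployment's own history, flagging values that sit far outside the learned norm:

```python
import statistics

def adaptive_threshold(history, k=3.0):
    """Threshold learned from this deployment's own history:
    mean plus k standard deviations of past readings."""
    return statistics.mean(history) + k * statistics.stdev(history)

def is_anomalous(value, history, k=3.0):
    """True if a fresh reading exceeds the self-learned threshold."""
    return value > adaptive_threshold(history, k)

# Hypothetical history of login-time readings (seconds) for this deployment.
history = [12, 13, 11, 12, 14, 13, 12, 11, 13, 12]

print(is_anomalous(25, history))  # True  – well above the learned norm
print(is_anomalous(13, history))  # False – within normal variation
```

A fixed best-practice limit (say, 30 seconds) would have missed the 25-second login entirely; the self-learned baseline catches it because it is abnormal *for this environment*.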

Understand cross-silo interactions to predict resource contentions - Cross-silo interactions can cause intolerable delays to end-user QoE, even for users outside the silo in which an apparent glitch has been discovered. But by understanding how applications interact with the IT infrastructure components in each silo, you can predict and prevent the specific cause of an issue, such as several devices or applications contending for a single resource.
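The cross-silo signature – several consumers degrading at once on the same shared resource – can be detected with a simple grouping pass. The resource and workload names below are hypothetical:

```python
from collections import defaultdict

def find_contended_resources(readings, latency_limit_ms=20, min_consumers=2):
    """Flag shared resources where several consumers degrade at once.

    `readings` maps (consumer, shared_resource) -> observed latency in ms.
    A resource is contended when at least `min_consumers` consumers exceed
    the limit on it simultaneously -- the signature of a single hot spot
    dragging down workloads across silos.
    """
    degraded = defaultdict(list)
    for (consumer, resource), latency in readings.items():
        if latency > latency_limit_ms:
            degraded[resource].append(consumer)
    return {r: sorted(c) for r, c in degraded.items() if len(c) >= min_consumers}

readings = {
    ("vdi-pool-a", "datastore-1"): 45,   # VDI silo hits shared storage
    ("sql-cluster", "datastore-1"): 38,  # a workload outside the VDI silo
    ("vdi-pool-b", "datastore-2"): 8,    # healthy
}

print(find_contended_resources(readings))
# -> {'datastore-1': ['sql-cluster', 'vdi-pool-a']}
```

Note that the SQL cluster is outside the VDI silo entirely, yet it is implicated in the same hot spot – the kind of cross-silo link that per-silo monitoring would miss.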

Prescriptive analytics deliver business outcomes - As private cloud VDIs are constructed, degrading events such as boot storms, vMotion migrations and rich-media I/O all need to be serviced while preserving consistent, predictable QoE for the other users sharing the virtual infrastructure. VM storm alerts can overwhelm an IT administrator who lacks real-time visibility into the dependencies among all elements of the IT infrastructure. By leveraging a real-time analytics engine that pinpoints the root cause of degradation, resource contention storms can be tracked and pre-empted. Prescriptive steps should be provided in context so the VDI administrator can deliver predictable business outcomes, such as maintaining low-latency and service-assurance targets.

Taking these five practices into consideration when deploying your VDI can make the process smoother and less time-consuming. VDI is a great business-agility asset, allowing organizations to minimize risk, enhance performance and greatly improve the end-user experience. But no new VDI deployment is free of risk unless predictive performance tools are fully integrated from the start.