The benefits of desktop and application virtualization are well known. Improved application and desktop security; efficiencies gained through centralized backup, upgrades, and patching; better overall manageability; energy and resource savings; and the ability to deliver applications unconstrained by the endpoint OS are all common reasons enterprises cite for adoption.
There are a handful of varieties of what we can collectively term hosted Windows applications and desktops: single-user and multi-user VMs running on premises (i.e., in the enterprise datacenter), single-user and multi-user VMs running in the cloud, and several flavors of hosted DaaS offerings.
For the purposes of this discussion, I’ll lump all of those together and call them RDP/VDI. Irrespective of the flavor, the essential ingredient is that an application or an OS instance runs in one place while the interacting user sits somewhere else. Data is flowing between the user device and the workload; in fact, unless data is flowing between the two points, no work is being done at all.
The Network Is Often Overlooked
Given the pivotal role of the network in RDP/VDI performance, it’s surprising how often the network is given short shrift when planning a deployment. Typically, you’ll see best practice guides focus on server hardware, licensing, application integration, data migration and synchronization, training, cutover, and server deactivation (not that those aren’t all important areas). But, to quote a hyperconvergence networking presentation at a recent SNIA event, “networking is often overlooked, as it is assumed to always ‘be there’”.
Despite the relative lack of attention during the planning phase, network issues commonly create real operational problems for RDP/VDI. A recent survey of IT executives found that the top five performance-related RDP/VDI user complaints are slow applications, slow logons, stuck sessions, poor multimedia playback, and session disconnects. Arguably, any of these complaints could stem from network issues, and this is underscored by respondents identifying “network problems” as one of the top three sources of performance issues.
The Impact Of Network Issues On RDP/VDI
To quantify the impact, the largest share of respondents indicated that they spend between one and three days a month troubleshooting performance problems. At the top of that range, that’s roughly 36 days a year, more than a month of working days dedicated to troubleshooting RDP/VDI performance. Yikes.
When networking is called out as a key element in planning guidelines, the advice is frequently to “upgrade your circuits”, which is a practical if anodyne suggestion. In many cases, though, organizations are either deploying their RDP/VDI workloads in the cloud or moving away from MPLS to SD-WAN technologies that rely on Internet transport between datacenter and branch. So while in these cases there is no “circuit” to upgrade, there is a very definite need for circuit-like performance and consistency to ensure adequate results for RDP/VDI.
The performance of RDP/VDI is sensitive not only to throughput, which naturally limits the number of simultaneous sessions that can be reliably maintained, but also to latency and loss. Excessive latency creates nagging issues like keyboard and mouse lag, and more acutely can result in session disconnection or “stuck” sessions. Packet loss compounds the problem, because retransmissions and recovery show up to the user as freezes and stalls.
This is a big deal for user productivity. A user who experiences a transient issue with a single application might be expected to shift their effort to a different task that doesn’t rely on the problem application. But if all the user’s relevant apps, or their entire desktop, are delivered through the session then they aren’t doing any work until the session is restored.
This means that the key latency metric for networks supporting RDP/VDI isn’t average latency; it’s the standard deviation of measured latency. In other words, consistency and predictability in latency are vital for good results and a satisfactory user experience.
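To make that concrete, here’s a minimal sketch in Python (the RTT numbers are invented for illustration, not measured data) comparing two hypothetical links: one with a slightly higher but steady round-trip time, and one that averages lower but spikes unpredictably.

```python
# Minimal sketch with invented RTT samples (milliseconds), not measured data.
import statistics

link_a = [48, 52, 50, 49, 51, 50, 47, 53, 50, 50]     # steady, ~50 ms
link_b = [20, 22, 21, 150, 19, 23, 150, 20, 22, 21]   # lower on average, but spiky

for name, samples in (("link A", link_a), ("link B", link_b)):
    mean = statistics.mean(samples)
    jitter = statistics.stdev(samples)
    print(f"{name}: mean={mean:.1f} ms  stdev={jitter:.1f} ms  max={max(samples)} ms")
```

Link B wins on average latency, but its spikes are exactly what produce input lag, stuck sessions, and disconnects; link A’s consistency is what an RDP/VDI session actually needs.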
Real-World Perspectives
Take Teridion customer CYBERGYM. They’re a cybersecurity training organization that runs “arenas”: live environments in which trainees defend in real time against active attacks. Trainees connect to the arena through VDI. When CYBERGYM expanded its service to Asia, it was immediately faced with a rash of RDP disconnections. On average, each trainee experienced 10 disconnections during a single day’s session, and those disconnects were the direct result of latency spikes that occurred regularly as the latency “weather” across the Internet shifted.
Fig 1. As I was writing this post, latency across the map looked ok, and I was concerned that I wouldn’t get a good screenshot of high latency in time to publish. I needn’t have worried. A few minutes later BOOM. The Internet never disappoints. Or always disappoints, depending on your perspective.
Teridion customer Cohesity faced similar challenges running VMware Horizon between their San Jose datacenter and their engineering teams in Bangalore. For both Cohesity and CYBERGYM, the solution was Teridion’s cloud WAN service, which among other benefits delivers low, predictable latency between enterprise sites and from the enterprise to cloud resources, without requiring dedicated circuits.
Fig 2. For RDP/VDI performance, reducing the average latency is OK, but the real improvements come from delivering *consistently* good latency.
Ensure That You Plan Right
When it comes to optimizing your RDP/VDI environment for adequate bandwidth and low latency, you have options:
- Deploy a network overlay like Teridion’s cloud WAN service.
- Retain a circuit-based MPLS infrastructure to assure consistent performance.
- Deploy dedicated network connections to the RDP/VDI workloads (for example, ExpressRoute to a DaaS service hosted on Microsoft Azure).
- Use SD-WAN application-based routing to choose the optimal network path (sketched after this list), which provides improvements at the edge but lacks end-to-end optimization.
- Reduce competing network traffic and network latency through VDI infrastructure placement or DaaS cloud instance selection; in other words, get the workloads closer to the users.
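To illustrate the path-selection idea behind the SD-WAN option above, here’s a hypothetical sketch; the path names and probe values are invented, and real SD-WAN products implement this logic in their own proprietary ways. The point is that a path carrying RDP/VDI should be ranked on latency consistency first and average latency second.

```python
# Hypothetical sketch of application-based path selection for RDP/VDI traffic.
# Path names and RTT probe samples (milliseconds) are invented for illustration.
import statistics

def pick_path_for_rdp(probe_results: dict[str, list[float]]) -> str:
    """Return the path whose recent RTT samples are the most consistent."""
    def score(samples: list[float]) -> tuple[float, float]:
        # Rank primarily by jitter (standard deviation), then by mean as a tie-breaker.
        return (statistics.stdev(samples), statistics.mean(samples))
    return min(probe_results, key=lambda path: score(probe_results[path]))

probes = {
    "mpls":           [46, 47, 45, 48, 46, 47],     # steady
    "broadband-inet": [22, 25, 160, 24, 23, 190],   # fast on average, spiky
    "lte-backup":     [60, 95, 70, 120, 65, 80],    # slow and variable
}
print("Steer RDP sessions over:", pick_path_for_rdp(probes))  # -> mpls
```

Whichever option (or combination) you land on, the same principle applies: measure and optimize for consistency, not just for averages.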
Each of these alternatives naturally has pros and cons, and you may find that a combination of options is necessary from a performance or budgetary standpoint to deliver the goods. Just ensure that, given the central role that the network plays in desktop virtualization, you make research and selection of the right alternative a key part of your planning.