by Evan Powell
Desktop virtualization, broadly defined as the use of software to abstract the desktop (its operating system, applications, and data) away from the physical machine, has carved out its own sector within the IT world. Yet complexity and cost overruns, often driven by storage, plague many Virtual Desktop Infrastructure (VDI) deployments. How should IT respond to ensure that management sees real value from these deployments? In this piece, we review VDI's unique issues, focusing on how IT can take advantage of this powerful technology while reducing the difficulties associated with a VDI rollout.
With VDI, IT departments can manage thousands of individual desktops easily from a single, centrally located computer or server; reduce the complexity of software upgrades and patches; and streamline the adoption of new applications. Security is enhanced as well: sensitive data is stored on the server, vastly mitigating the risk posed by employees' mobile devices. And implementing VDI can yield significant cost and time savings for IT departments.
A number of prominent technology innovators already see VDI's value. For example, HP shifted its focus from traditional PC desktops to VDI in August 2011. VMware, a provider of virtualization software since its founding in 1998, has been leading the charge for years.
After installation, VDI environments are in a constant state of flux: new servers, hypervisors, and storage systems are brought online; systems are adjusted as personnel numbers change; and applications are added or removed. How, then, can VDI add real value to the enterprise? Five key issues must be addressed to prove VDI's superiority over traditional desktop-centric approaches: compatibility, cost, storage, scalability, and analytics.
Some external SANs were not designed with VDI in mind. Applications, user data, and system files all need to be stored somewhere; the real issue is where the operating system disk image for each individual virtual machine lives. Getting data to and from the SAN without any issues can be daunting. Servers typically need low-latency access for performance reasons, so how the servers and the SAN connect is of the utmost importance.
One solution to this problem (and to many of the following issues) is to run a pilot test. It is absolutely crucial for your organization to maintain uptime; if something fails, expect loud complaints from both your end users and your management. Pre-go-live testing identifies and resolves hurdles before the system becomes available companywide.
The traditional storage model has always been to select a vertically integrated storage provider, which offers software and hardware in one package. But vendor lock-in can result in serious cost overruns. In recent years, OpenStorage has emerged, using Z file system (ZFS) technology (originally developed by Sun Microsystems) to run on commodity hardware, usually industry-standard x86 servers. Being hardware-agnostic eliminates vendor lock-in, which allows organizations to avoid onerous and inconveniently timed cost hikes.
For every dollar spent on VDI deployments, anywhere from three to seven dollars is spent on storage. Enterprise storage, for example, now accounts for more than 40 percent of total IT spending. Organizations, both big and small, need to keep these costs down for VDI to succeed and to obtain buy-in from management and key enterprise stakeholders.
Storage is a key component of all businesses. Unfortunately, VDI's storage requirements are often complex, and estimating them can be a full-time job in itself. Quite often, IT departments lack real-time storage requirement data from their infrastructure, relying instead on theoretical estimates or anecdotal evidence.
The primary (though certainly not the only) bottleneck is the IOPS read/write ratio. Writing is more taxing than reading for any type of storage system, which dramatically increases the number of disks required to serve the workload. The exact mix varies by user and by application, but lacking even a basic estimate of the number of disks required can cause serious issues for IT departments.
I would recommend the following common-sense best practices for deploying VDI:
- Apply SSDs in a Hybrid Storage Pool approach, hosting virtual machines' temporary storage on the SSDs;
- Use default profiles as a starting point. For VDI workloads, most configurations benefit from a 4K block size, ZFS compression, and deduplication off. Configure RAID 10 or RAIDZ-2 to protect against disk failure (a configuration sketch follows this list);
- Configure for the appropriate level of desktop persistence and availability. The type of desktop image needed for each user (floating or dedicated) depends on the nature of the user's role and workload;
- Implement disk deduplication in a VDI environment, since reducing the size of stored data lets the cache perform more effectively. This accelerates overall performance and increases usable capacity;
- Apply mirroring to the "Golden Image" to improve performance;
- Use a separate network for NFS-attached storage.
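As a rough illustration of the first two recommendations, the sketch below applies those settings with the standard zpool and zfs command-line tools, driven from Python. The pool name (vdipool), dataset name (vdipool/vms), and device names are placeholders; it assumes a ZFS-capable host, so adjust for your environment.

```python
# Sketch: build a mirrored pool with SSD cache/log devices and apply the
# VDI dataset settings above. All names below are placeholders.
import subprocess

def run(*args):
    """Echo a command, then run it, failing loudly on error."""
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# Mirrored pairs (RAID 10 style); substitute a "raidz2" vdev for RAIDZ-2.
run("zpool", "create", "vdipool",
    "mirror", "c0t0d0", "c0t1d0",
    "mirror", "c0t2d0", "c0t3d0")

# SSDs as a Hybrid Storage Pool: read cache (L2ARC) and intent log (ZIL).
run("zpool", "add", "vdipool", "cache", "c0t4d0")
run("zpool", "add", "vdipool", "log", "c0t5d0")

# Dataset for VM images: 4K block size, compression on, deduplication off.
run("zfs", "create", "vdipool/vms")
run("zfs", "set", "recordsize=4K", "vdipool/vms")
run("zfs", "set", "compression=on", "vdipool/vms")
run("zfs", "set", "dedup=off", "vdipool/vms")
```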
Despite its many benefits, scaling a VDI solution quickly can degrade performance, leading to a frustrating end-user experience.
Consolidating hundreds of desktops onto one server can result in high IOPS demand, performance drawbacks, and boot storms. (A boot storm occurs when hundreds or thousands of users log in to applications simultaneously in a VDI environment.) Normally, legacy storage presents a sequential load with a read/write ratio of about 80/20. When many desktops are consolidated under VDI, the pattern shifts from sequential to random, with read/write ratios of 50/50 or worse. Performance suffers, associated costs rise (expectedly or not), and the overall IT structure suffers.
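To see why that shift matters, here is a back-of-the-envelope sizing sketch. The 150 IOPS-per-disk figure (a common rule of thumb for a 15K RPM drive) and the RAID 10 write penalty of two are illustrative assumptions, not measurements from any particular array.

```python
import math

# Back-of-the-envelope spindle sizing. Writes cost more than reads because
# each logical write becomes multiple physical I/Os (the RAID write
# penalty: roughly 2 for RAID 10, higher for parity RAID).
def disks_needed(frontend_iops, read_fraction, write_penalty=2, disk_iops=150):
    reads = frontend_iops * read_fraction
    writes = frontend_iops * (1 - read_fraction)
    backend_iops = reads + writes * write_penalty  # I/Os actually hitting disk
    return math.ceil(backend_iops / disk_iops)

workload = 10_000  # aggregate front-end IOPS from the consolidated desktops

print(disks_needed(workload, read_fraction=0.8))  # legacy 80/20 mix -> 80 disks
print(disks_needed(workload, read_fraction=0.5))  # VDI 50/50 mix -> 100 disks
```

Even with a modest write penalty, moving from an 80/20 to a 50/50 mix raises the spindle count by a quarter, and parity RAID widens the gap further.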
To be on the safe side with a scalable system, build in a safety margin in case users run applications with high memory and I/O demands. The operating system matters, too. In a normal Windows XP environment, a heavy virtual machine user might generate 12 to 16 IOPS per VDI client, while a heavy VM user on Windows 7 generates 14 to 20 IOPS per VDI client.
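A minimal fleet-level estimate using the per-client figures above; the 30 percent cushion is an arbitrary illustration of the safety margin, not a vendor guideline.

```python
import math

def fleet_iops(clients, iops_per_client, safety_margin=0.30):
    """Steady-state IOPS for a desktop fleet, padded for bursty workloads."""
    return math.ceil(clients * iops_per_client * (1 + safety_margin))

print(fleet_iops(500, 16))  # 500 heavy Windows XP users -> 10,400 IOPS
print(fleet_iops(500, 20))  # 500 heavy Windows 7 users -> 13,000 IOPS
```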
Even if everything scales well with your VDI deployment, watch out for "orphaned storage," in which a great deal of your capacity sits unconsumed, wasting precious resources and slowing overall performance. There is plenty of capacity at your disposal, but the system becomes bottlenecked on I/O. The solution is the window into real-time analytics that VDI can provide.
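Spotting orphaned capacity amounts to comparing what is provisioned against what is actually in use. The sketch below does this for ZFS volumes by parsing zfs list output; it assumes desktop images live on zvols, and the 20 percent threshold is an arbitrary illustration.

```python
import subprocess

# Flag volumes whose provisioned size is mostly unused: a rough proxy for
# "orphaned" capacity. Assumes desktop images are stored on ZFS volumes
# (zvols); the 20% usage threshold is an arbitrary illustration.
def find_orphaned(threshold=0.20):
    out = subprocess.run(
        ["zfs", "list", "-Hp", "-t", "volume", "-o", "name,volsize,refer"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.strip().splitlines():
        name, volsize, refer = line.split("\t")
        usage = int(refer) / int(volsize)
        if usage < threshold:
            print(f"{name}: only {usage:.0%} of provisioned space in use")

find_orphaned()
```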
After your successful VDI deployment, continued support for end users requires monitoring analytics to mitigate problems in real time. Unfortunately, this is an area of VDI that is still maturing and needs additional input and development.
One issue to watch for is boot storms. These drag down network throughput and SAN I/O and hinder performance, and they can last anywhere from 30 minutes to two hours. I recommend using single vCPUs, reducing the use of 3D screen savers, staggering antivirus disk checks, and enabling "timed boots" (where the machine is already running when users arrive at their desks) to help alleviate the danger of these storms.
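A "timed boot" can be as simple as powering desktops on in small, staggered batches before the workday begins. The sketch below is schematic: power_on() is a hypothetical stand-in for whatever power-on call your hypervisor's API exposes, and the batch size and pause are arbitrary.

```python
import time

def power_on(vm_name):
    """Hypothetical stand-in for a hypervisor power-on API call."""
    print(f"powering on {vm_name}")

def staggered_boot(vm_names, batch_size=25, pause_seconds=120):
    """Boot desktops in small batches so the storage absorbs each burst."""
    for i in range(0, len(vm_names), batch_size):
        for vm in vm_names[i:i + batch_size]:
            power_on(vm)
        time.sleep(pause_seconds)

# Scheduled (via cron, for example) an hour before employees arrive, this
# spreads the boot I/O over time instead of letting it land as one storm.
staggered_boot([f"desktop-{n:04d}" for n in range(1000)])
```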
To conclude, VDI, in all its different forms and uses, is already a very powerful and useful tool, one that can yield incredible benefits quickly through centralization, better management, and stronger security. It's ironic, but centralizing your entire infrastructure actually makes it more flexible!
Being able to manage a huge number of desktops with the click of a mouse should allow IT departments to sleep easier. But don't think that VDI can be thrown into the IT mix and left alone. To demonstrate real value to management and to end users, VDI has to be introduced sensibly, monitored in real time for ongoing system and personnel changes, and reviewed for optimal production. That said, done correctly, VDI will be a game changer for many organizations.
Evan Powell is the CEO of Nexenta Systems (Santa Clara, CA). www.nexenta.com