
How to Deploy Virtual Storage Appliances

Make the most of storage virtualization and unused disk space with these pointers.

posted January 11, 2013  |  Appears in the Winter 2013 issue of StateTech Magazine.

Virtual storage appliances (VSAs) give traditional disk arrays a run for their money. Available from manufacturers such as HP, NetApp and VMware, the software spreads data across unused disk space in multiple servers to provide redundancy, fault tolerance and continuity of operations.

Virtualized storage delivers many of the benefits of storage area networks at a relatively low cost. What's more, VSAs typically require less power and cooling than standard SANs. Follow these tips for deploying virtual storage systems and using them to their full potential.

1. Ensure the network speeds are sufficient.

Be careful to match the links between servers to the types of applications they'll support. Disk I/O between servers travels over a network connection, which may be Gigabit Ethernet, 10 Gigabit Ethernet or a WAN link with speeds ranging anywhere from 56-kilobit-per-second modem connections to 100 megabits per second.

IT managers who are running applications with heavy disk I/O will want the fastest connection they can afford; otherwise, data replication will slow the system to a crawl.
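A quick back-of-envelope check makes this concrete. The sketch below, with illustrative (not vendor-specified) workload and overhead figures, tests whether a link can carry the replicated write traffic:

```python
# Back-of-envelope check: can a given link keep up with replicated write I/O?
# The overhead fraction and workload numbers are illustrative assumptions.

def link_is_sufficient(write_mb_per_s: float, link_mbit_per_s: float,
                       overhead: float = 0.3) -> bool:
    """Return True if the link can carry the replication traffic.

    write_mb_per_s  -- sustained write rate across all VMs (megabytes/s)
    link_mbit_per_s -- raw link speed (megabits/s)
    overhead        -- fraction reserved for protocol and replication overhead
    """
    usable_mbit = link_mbit_per_s * (1 - overhead)
    needed_mbit = write_mb_per_s * 8  # bytes -> bits
    return needed_mbit <= usable_mbit

# A 50 MB/s write workload needs 400 Mbit/s plus overhead, so Gigabit
# Ethernet passes comfortably while a 100 Mbit/s link falls far short.
print(link_is_sufficient(50, 1000))  # True
print(link_is_sufficient(50, 100))   # False
```

Running the numbers this way before deployment is cheaper than discovering mid-rollout that replication has slowed every application on the array.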

2. Examine requirements for matching disk space on each server.

Some systems can use whatever unused disk space is available on each system. Others require that all partitions be the same size; for example, 500 gigabytes per server. Designating a fixed amount of storage will limit the size of the system to that of the server with the least available unused space. If the space varies widely, consider leaving one system out of the VSA or look for software that allows partitions to differ in size.
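The arithmetic behind that limit is simple enough to sketch. With hypothetical free-space figures, the usable partition size is the minimum across servers, and dropping one outlier can raise it dramatically:

```python
# Sketch: with fixed same-size partitions, capacity per server is capped
# by the node with the least free space. Server names and gigabyte
# figures below are hypothetical.

free_gb = {"srv1": 900, "srv2": 750, "srv3": 500, "srv4": 120}

partition_gb = min(free_gb.values())
print(partition_gb)  # 120 -- every server is limited to srv4's free space

# Leaving the outlier out of the VSA raises the usable partition size.
without_outlier = {name: gb for name, gb in free_gb.items()
                   if name != "srv4"}
print(min(without_outlier.values()))  # 500
```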

3. Carefully consider the types of disks in a virtual array.

If an organization has three servers with speedy 15,000-rpm SAS disks and a fourth with 7,200-rpm SATA disks, it may be better not to place them all in the same virtual volume. The slower disks take longer to write, so data waiting to be synchronized must be held in memory for longer. Fortunately, most VSAs allow administrators to set up multiple volumes and specify which servers and disks belong to each.
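One straightforward way to plan such a layout is to group servers into candidate volumes by disk type, as in this sketch (the server inventory is hypothetical):

```python
# Sketch: group servers into separate candidate volumes by disk type so
# slow SATA spindles don't gate a fast SAS volume's synchronization.
# The inventory below is hypothetical.
from collections import defaultdict

servers = {
    "srv1": "sas-15k",
    "srv2": "sas-15k",
    "srv3": "sas-15k",
    "srv4": "sata-7.2k",
}

volumes = defaultdict(list)
for name, disk_type in servers.items():
    volumes[disk_type].append(name)

for disk_type, members in sorted(volumes.items()):
    print(disk_type, members)
# sas-15k gets its own three-server volume; sata-7.2k stands alone.
```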

4. Test live migration.

IT managers can use the shared storage of a VSA to migrate virtual machines from one server to another for failover. During live migration, the data is held in two virtual partitions that are kept synchronized at all times, so that if one virtual machine or server fails, the VMs can be restarted on the backup server. The amount of data required to keep the two stores in sync varies with the disk I/O of all the VMs involved. If the connections between servers can't keep pace with that I/O, the copies fall out of sync and the migration fails, so verify beforehand that the VSA's links can carry the synchronization traffic.
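The failure mode is easy to quantify: whenever aggregate VM writes exceed replication bandwidth, a backlog of unsynchronized data grows without bound. A minimal sketch, with illustrative throughput figures:

```python
# Sketch: if replication bandwidth falls short of the aggregate VM write
# rate, unsynchronized data accumulates and the mirrors drift apart.
# All throughput figures are illustrative.

def sync_backlog_mb(write_mb_per_s: float, link_mb_per_s: float,
                    seconds: float) -> float:
    """Unsynchronized data (MB) that accumulates over `seconds`."""
    deficit = max(0.0, write_mb_per_s - link_mb_per_s)
    return deficit * seconds

# 60 MB/s of VM writes over a link that replicates 45 MB/s leaves a
# 15 MB/s deficit -- 900 MB of drift after one minute, so the
# migration never converges.
print(sync_backlog_mb(60, 45, 60))   # 900.0
print(sync_backlog_mb(60, 120, 60))  # 0.0 -- the link keeps up
```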

5. Look at the requirements for the VSA itself.

Most VSAs do not require a great amount of CPU or memory: one CPU core with 3 to 4 gigabytes of RAM is generally enough. The system will probably require a separate virtual switch, and a dedicated Ethernet connection would be prudent.

VSAs allow the administrator to tap the unused capacity of disks in multiple physical servers and can provide many of the same benefits offered by a separate SAN. Because of limitations in connection speeds as well as administrative overhead, these virtual systems aren't a substitute for high-performance SANs, but can provide inexpensive storage for many applications.
