Jul 30 2009

Adding Sizzle

Fast Fibre Channel switches deliver big pipes to virtualized SAN environments.

Organizations seeking oomph for their storage area networks should consider 8 gigabit-per-second Fibre Channel.

Fibre Channel remains the gold standard for SANs, thanks to its reliability and fault tolerance. Consolidation, virtualization, blade servers and multicore CPUs are all driving demand for the 8Gbps version of the technology, notes Tom Hammond-Doel, vice chairman of the Fibre Channel Industry Association.

For example, if a data center consolidates 20 servers, it boosts the need for aggregate bandwidth and input/output operations per second (IOPS). "Let's say we put eight virtual machines on one physical machine. With 8G Fibre Channel, each has essentially a 1Gbps pipe to the outside world," Hammond-Doel says. "We're finding that 8Gbps Fibre Channel is a perfect play into the virtualization market."
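
For readers who want to sanity-check that math, here's a minimal back-of-the-envelope sketch. The link rates are the nominal figures used in the article; real-world throughput would be lower once protocol overhead and contention among the virtual machines are factored in.

```python
# Rough per-VM share of a host's Fibre Channel link, using nominal link rates.
# Ignores encoding overhead, contention and bursty I/O patterns.

def per_vm_bandwidth_gbps(link_gbps: float, vms_per_host: int) -> float:
    """Evenly divide a host's Fibre Channel link among its virtual machines."""
    return link_gbps / vms_per_host

for link_gbps in (4, 8):  # 4Gbps vs. 8Gbps Fibre Channel
    share = per_vm_bandwidth_gbps(link_gbps, vms_per_host=8)
    print(f"{link_gbps}Gbps FC, 8 VMs -> ~{share:.2f} Gbps per VM")

# 8Gbps FC with 8 VMs works out to roughly 1 Gbps apiece, matching
# Hammond-Doel's figure; 4Gbps FC would leave each VM about half that.
```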

Late last year, 8Gbps switches arrived on the scene from manufacturers such as Brocade, Cisco Systems, Hewlett-Packard and QLogic; Brocade, Emulex and QLogic offer 8Gbps host bus adapters (HBAs).

Greg Schulz, founder and senior analyst for StorageIO and author of The Green and Virtual Data Center, says early deployments can be found in larger data centers and large-scale computing environments.

Not all states or localities need the speed of 8Gbps Fibre Channel yet. But video-streaming applications such as surveillance and video editing can benefit, as can environments in need of overhead capacity, says Richard Rose, product manager for Cisco's Data Center Switching Technology Group.

Plus, there's a piece missing for many data centers, he says: 8Gbps Fibre Channel disk and tape arrays. "As soon as there are 8Gbps targets, backups will benefit," Rose says.

Cisco's Rose recommends planning for the future by choosing a Fibre Channel switch that's 8Gbps-capable, then selecting a mix of pluggable optics to fit each port's needs and budget. 8Gbps optics currently run three to four times the price of 4Gbps optics. "You're paying a 20 to 30 percent premium at a solution level," he says.
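
To see how a three- to four-fold optics premium can shrink to a 20 to 30 percent premium for the overall solution, consider a rough sketch. All of the dollar figures below are hypothetical placeholders, not prices from the article; the point is simply that optics are one line item among several per port.

```python
# Hypothetical per-port costs to illustrate Rose's point. None of these
# prices come from the article; only the 3-4x optics multiple does.

optic_4g = 100.0               # assumed price of a 4Gbps optic
optic_8g = 3.5 * optic_4g      # "three to four times the price"
rest_of_port = 1000.0          # assumed switch port, HBA share and cabling

solution_4g = rest_of_port + optic_4g
solution_8g = rest_of_port + optic_8g

premium = (solution_8g - solution_4g) / solution_4g
print(f"Optics premium: {optic_8g / optic_4g:.1f}x")
print(f"Solution-level premium: {premium:.0%}")  # lands in the 20-30% range
```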

With time, though, the price premium will drop. Schulz notes Hewlett-Packard offers a four-server, 8Gbps SAN starter kit for about $8,000, which is less expensive per port than 10Gbps Ethernet.

Schulz expects that agencies will start jumping on 8Gbps Fibre Channel this year, with the big push happening in 2010 and 2011. After that, he says, 16Gbps Fibre Channel may arrive within three or four years, or users may make the transition to Fibre Channel over Ethernet (see below).

Blended Fibre

An evolving storage specification has the potential to further federal green initiatives.

The Fibre Channel over Ethernet (FCoE) specification carries storage area network traffic and regular network traffic over the same Ethernet links. Because the protocol runs directly over Ethernet, bypassing the TCP/IP stack, it's not routable; it's geared for connecting servers within the data center.

"It's a true innovation that is extending the life of Fibre Channel," says Tom Hammond-Doel, vice chairman of the Fibre Channel Industry Association. Pre-standard FCoE products have already appeared. Industry watchers expect the T11 Technical Committee to finalize the specification this year and the market to boom in 2010.

The technology also provides cost reduction and environmental benefits. Richard Rose, product manager for Cisco Systems' Data Center Switching Technology Group, says it's common to see as many as six adapters coming out of a server for various functions. With FCoE, agencies can consolidate all of those down to two converged network adapters that contain both Fibre Channel and Ethernet functionality. Factor in the reduction of network interface cards and switches, and it can add up to significant energy savings.

By switching to FCoE, what formerly took at least four interfaces per server (two NICs and two HBAs) would take just two FCoE adapters per server. Based on QLogic research, the power savings add up. Consider a rack of 20 servers that rely on a traditional Fibre Channel HBA topology: if each HBA consumes about 12.5 watts, eliminating two interface cards per server equates to a 500-watt reduction per rack.
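
The per-rack figure follows directly from QLogic's numbers. Here's the arithmetic spelled out, assuming every server in the rack gives up both of its dedicated Fibre Channel HBAs:

```python
# Working out the 500-watt per-rack reduction from the article's figures.

servers_per_rack = 20
hbas_removed_per_server = 2   # two interface cards eliminated per server
watts_per_hba = 12.5          # QLogic's estimate per HBA

savings_watts = servers_per_rack * hbas_removed_per_server * watts_per_hba
print(f"Per-rack reduction: {savings_watts:.0f} W")  # 500 W
```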
