Mar 27 2008

Computing Fabrics

Data center virtualization treats server, storage and network resources as a single pool.

Storage and processing virtualization have changed the way data centers operate. The City of Dublin, Ohio, for instance, virtualized its storage eight years ago and has recently used VMware to consolidate 25 servers down to two that run with 16 processor cores.

“We have a pretty small technical staff and a fairly large organization to support with a lot of different business processes,” says Network Operations Manager Bob Schaber. “Virtualization makes this much better from a personnel standpoint, and we can recover much easier from any type of failure.”

The next step is to virtualize the entire data center infrastructure — processing, storage and networking — into a single computing fabric. Last fall, Hewlett-Packard and IBM introduced software to manage the data center as a combined resource pool. Switch manufacturers have also developed their own strategies and product lines to support this concept. Brocade Communications Systems of San Jose, Calif., calls its approach the Data Center Fabric (DCF). Cisco Systems, also of San Jose, has the VFrame Data Center.

“With these new data center fabric environments, the physical assets are managed very consistently,” says Richard Villars, vice president of storage systems for consultancy IDC, in Framingham, Mass. “What I am doing is just moving workloads between different compute or storage resources on the fly, without disrupting the end user.”

Villars says these fabric systems are a natural extension of the technological developments that gave us SANs and server virtualization. But what has brought them to the forefront over the past six months is growing awareness of how companies such as Google and Yahoo use pooled computing resources. As this awareness has grown, so has the demand to adapt the approach to other applications.

Available Options

Both Brocade’s and Cisco’s approaches consist of a mix of hardware and software. DCF is an application-oriented architecture that supports multiprotocol connectivity and policy-based automation, and provides continuous data protection, data migration, server and storage virtualization, and data encryption. Cisco’s VFrame Data Center is an appliance with a Java-based application for provisioning and reusing infrastructure components. Krish Ramakrishnan, vice president and general manager of Cisco’s server virtualization unit, says VFrame is useful for government data centers that centrally host applications for a variety of agencies. As different groups’ computational needs rise or fall, VFrame lets the customer reconfigure resources and add new users or applications on the fly, based on predetermined policies. The resources can be reprovisioned in about a minute.

“If an official announces a new policy or service, IT administrators can easily monitor traffic, coordinate with load balancers and storage, and move the appropriate number of servers into that environment,” says Ramakrishnan. “You can never anticipate how the traffic flow will happen, but need to cope with it when it does.”

He says a large county is currently testing VFrame for just such an application but has not yet put it into production.
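
The policy-driven behavior Ramakrishnan describes can be pictured with a short sketch. The Python below is a hypothetical illustration, not Cisco’s actual VFrame interface; the Policy and Service classes and the rebalance() function are invented for this example, and a real fabric controller would also coordinate load balancers, storage and network configuration before shifting traffic. The sketch simply moves a server from a shared pool into a service when utilization crosses a policy threshold, and returns it when demand subsides.

    # Hypothetical illustration of policy-based reprovisioning; names are invented.
    from dataclasses import dataclass

    @dataclass
    class Policy:
        name: str
        max_load: float   # utilization above this triggers adding a server
        min_load: float   # utilization below this returns a server to the pool

    @dataclass
    class Service:
        name: str
        servers: list
        load: float       # current average utilization, 0.0 to 1.0

    def rebalance(service, policy, free_pool):
        """Move servers between the shared pool and the service per policy."""
        if service.load > policy.max_load and free_pool:
            server = free_pool.pop()
            service.servers.append(server)   # scale out from the shared pool
            print(f"Provisioned {server} into {service.name}")
        elif service.load < policy.min_load and len(service.servers) > 1:
            server = service.servers.pop()
            free_pool.append(server)         # reclaim idle capacity for the pool
            print(f"Returned {server} to the shared pool")

    # Example: a new online service sees a spike in demand.
    pool = ["blade-03", "blade-04"]
    portal = Service("permit-portal", servers=["blade-01", "blade-02"], load=0.92)
    rebalance(portal, Policy("peak-traffic", max_load=0.80, min_load=0.30), pool)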

Although the initial products are on the market, Villars says these fabrics are still bleeding edge. Duncan Bond, data network supervisor for the state of Maine, concurs. He suspects fabric computing is in his future but says it may be several years before his organization adopts it.

Nevertheless, Villars recommends that agencies in the business of coordinating content or providing information to individuals or businesses start looking at using fabric computing for new applications, even if they don’t want to virtualize the entire data center. “It will make them much more responsive to their customers,” he says.

How Computing Fabrics Work

Cisco VFrame Data Center enables the coordinated provisioning and reuse of physical compute, storage and network resources from shared pools.

  1. Infrastructure service templates describe the rules by which data center resources host applications.
  2. The computing fabric discovers available resources and groups them into shared pools based on attributes such as performance, capacity and availability.
  3. The computing fabric orchestrates the provisioning of a service network from discovered shared pools of server, storage and network resources.
  4. The computing fabric automates common operating tasks such as failover, policy-based resource optimization and service maintenance.
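
For a more concrete picture of these four steps, here is a minimal sketch in Python. It is illustrative only: the template dictionary and the provision() and failover() functions are invented for this example and do not correspond to any vendor’s API, and a production fabric controller would also configure VLANs, load balancers and storage paths as part of the same workflow.

    # 1. An infrastructure service template captures the hosting rules for an app.
    template = {
        "app": "permit-tracking",
        "servers": {"count": 2, "min_cores": 4},
        "storage": {"capacity_gb": 500},
    }

    # 2. Discovery groups available resources into shared pools by their attributes.
    inventory = [
        {"id": "blade-01", "type": "server", "cores": 8},
        {"id": "blade-02", "type": "server", "cores": 4},
        {"id": "blade-03", "type": "server", "cores": 8},
        {"id": "array-01", "type": "storage", "capacity_gb": 2000},
    ]
    server_pool = [r for r in inventory if r["type"] == "server"]
    storage_pool = [r for r in inventory if r["type"] == "storage"]

    # 3. Orchestration provisions a service from the discovered pools.
    def provision(template, servers, storage):
        chosen = [s for s in servers if s["cores"] >= template["servers"]["min_cores"]]
        chosen = chosen[: template["servers"]["count"]]
        disks = [d for d in storage if d["capacity_gb"] >= template["storage"]["capacity_gb"]]
        return {"app": template["app"], "servers": chosen, "storage": disks[:1]}

    service = provision(template, server_pool, storage_pool)

    # 4. Automation handles routine tasks such as failover, drawing on the same pools.
    def failover(service, pool, failed_id):
        service["servers"] = [s for s in service["servers"] if s["id"] != failed_id]
        spare = next((s for s in pool
                      if s["id"] != failed_id and s not in service["servers"]), None)
        if spare:
            service["servers"].append(spare)
        return service

    failover(service, server_pool, failed_id="blade-01")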
