Designing for the new data center

Four experts contrast new data center approaches against traditional methods for solving tough business problems.

Sure, the new data center is all about geographically dispersed resources pooled to work as a single entity. But how would a new data center network design differ from a traditional approach to today's thorniest business problems? That's the challenge we gave to four systems integrators who specialize in new data center technologies.

While the businesses we described were fictitious, the integrators' solutions had to be based on actual work they've done for users. As it turns out, all the designers now look at a company's infrastructure as a virtualized entity built on logical components - not as a series of hardware, software and services. That perspective makes them see a server as a processing peripheral, an application as modular bits of code that can be executed on far-flung servers, or an instant message as a piece of intellectual property. That new view becomes the basis for creative next-generation solutions.

Disaster recovery: Protecting a revenue-critical e-commerce site

Lee Abrahamson, practice director of SAN solutions and advanced technology, CNT

Business problem: A shipping company relies heavily on its e-commerce site - so much so that it loses money every second the site is down. A disaster that takes the site down for hours to days would mean thousands - potentially millions - of dollars in lost revenue and perhaps permanent customer attrition.

Traditional approach: Create recovery points at 15-minute intervals on inexpensive but reliable tape and store copies with an off-site disaster-recovery vendor. If a disaster occurs, contact the off-site vendor. However, if the disaster also affects many of that vendor's other customers, the vendor might need days to restore systems. Some disaster-recovery sites can handle only a small percentage of their customers simultaneously.

Tape also might prove to be a bottleneck. A busy e-commerce database easily could require 100 or more tape mounts in a 24-hour period - that is, the number of tapes used to back up a daily base copy of the entire database plus bundles of transactions captured at 15-minute intervals. Restoring that many tapes would take hours, perhaps even days. Plus, for tapes stored off-site, the company also must factor in the time - likely another day - to locate and ship the tapes.

New data center approach: Use information life-cycle management (ILM) to put data on the most cost-effective media that also has the performance attributes needed to complete the storage job. Use expensive disk, mid-priced disk, less-expensive disk and tape.
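To make the idea concrete, here is a minimal sketch (in Python) of how an ILM placement policy might choose a tier. The tier names, IOPS figures and age threshold are illustrative assumptions for this sketch, not any vendor's actual defaults:

```python
# Illustrative ILM placement policy: put data on the cheapest tier that
# still meets its performance and access requirements. All tier names,
# IOPS figures and thresholds below are assumptions for this sketch.

TIERS = [                      # ordered cheapest-first: (name, sustainable IOPS)
    ("tape-archive",     0),   # long-term retention only
    ("tier3-sata-disk",  100),
    ("tier2-mid-disk",   1000),
    ("tier1-fc-disk",    5000),
]

def place(required_iops: int, days_since_access: int) -> str:
    """Return the cheapest tier that satisfies the storage job."""
    # Data untouched for a year can be archived regardless of nominal needs.
    if days_since_access > 365:
        return "tape-archive"
    # Otherwise pick the cheapest tier fast enough for the workload.
    for name, iops in TIERS:
        if iops >= required_iops:
            return name
    return TIERS[-1][0]  # nothing cheaper is fast enough
```

For example, place(2000, 3) lands a hot database on tier1-fc-disk, while a week-old snapshot that needs only modest throughput lands on tier3-sata-disk.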

One way to execute ILM is storage virtualization, which inserts storage intelligence between the host and its storage. Most virtualization engines reside "in-band" on the storage network and decouple the storage management functions (mirroring and snapshots) from the storage itself. This lets users build heterogeneous storage environments (multiple tiers and vendors). Such virtualization engines may be appliances today, but eventually they simply will be embedded in a storage network node (such as a core switch).

Virtualization presents a logical view to the server. In what I call "logical-land," certain physical storage limitations (size allocations, expansions) can be removed. Storage functions such as mirroring and snapshots can be applied to any storage type from any vendor. The downside is that the engine becomes a single point of failure: without it, servers can't read the storage, even if they are reconnected directly to it.

Fortunately, another option is available: storage-area network (SAN)-based replication of the physical data rather than the logical data. I call this "virtualization lite." This form of virtualization resides in the data path but presents the physical disk as-is to the server; it does not require logical re-mapping of the disk. This version sacrifices some features of full virtualization but retains key ones such as heterogeneous mirroring and snapshots. And if the engine is removed, servers can operate directly connected to the disk.
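The difference between the two modes is easiest to see in code. The sketch below (class and method names are assumptions for illustration, not any product's interface) contrasts a full virtualization engine, which remaps every logical block onto heterogeneous physical extents, with a virtualization lite engine, which passes blocks through at their original addresses and only tees writes to mirror targets:

```python
class FullVirtualization:
    """In-band engine that remaps a logical address space onto
    heterogeneous physical extents. If the engine disappears, the
    extent map is lost and servers can't read the disks directly."""

    def __init__(self, extent_map):
        self.extent_map = extent_map  # logical block -> (device, physical block)

    def read(self, logical_block):
        device, physical_block = self.extent_map[logical_block]
        return device.read(physical_block)


class VirtualizationLite:
    """In-band engine that presents the physical disk as-is (no
    remapping) but tees writes to mirror and snapshot targets.
    Remove the engine and servers can still read the primary disk."""

    def __init__(self, primary, mirrors):
        self.primary = primary
        self.mirrors = mirrors  # other tiers or other vendors' arrays

    def write(self, block, data):
        self.primary.write(block, data)   # pass-through: same block address
        for target in self.mirrors:
            target.write(block, data)     # heterogeneous mirroring

    def read(self, block):
        return self.primary.read(block)   # no translation needed
```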

So when looking to save that e-commerce site from a time-consuming recovery, the first change is to replace tape with Tier 3 storage (Serial Advanced Technology Attachment disk) as the primary recovery mechanism; tape would be used for archiving. Virtualization lite lets us take highly efficient snapshots (a base copy plus block-level changes) of our expensive Tier 1 storage, put them on inexpensive Tier 3 storage, and mix and match vendors between tiers. By retaining snapshots on disk, a local recovery even of a large database is a matter of rolling back to a previous online snapshot, which generally takes minutes - or a few hours for an exceptionally large database. Lastly, the database is archived to tape weekly or so for long-term retention.
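A minimal sketch of the base-copy-plus-block-changes idea (the class and its interface are assumptions for illustration, not CNT's product): each snapshot records only the blocks that changed during the interval, and a rollback replays deltas onto the base copy instead of restoring whole volumes from tape.

```python
class SnapshotStore:
    """Base copy plus block-level deltas, cheap enough to keep on
    Tier 3 disk. Rolling back means replaying deltas up to a chosen
    point in time rather than mounting dozens of tapes."""

    def __init__(self, base_blocks):
        self.base = dict(base_blocks)  # block_id -> data (the base copy)
        self.deltas = []               # one dict of changed blocks per snapshot

    def snapshot(self, changed_blocks):
        # Record only the blocks that changed this interval (e.g. 15 minutes).
        self.deltas.append(dict(changed_blocks))

    def rollback(self, snapshot_index):
        """Reconstruct the volume as of snapshot `snapshot_index`."""
        state = dict(self.base)
        for delta in self.deltas[: snapshot_index + 1]:
            state.update(delta)
        return state

# Usage: two 15-minute snapshots, then roll back to the first one.
store = SnapshotStore({0: "orders-v1", 1: "customers-v1"})
store.snapshot({0: "orders-v2"})
store.snapshot({1: "customers-v2"})
as_of_first = store.rollback(0)  # {0: "orders-v2", 1: "customers-v1"}
```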

One bonus of virtualization lite is more affordable in-house disaster recovery. Most companies already have multiple data centers and network connectivity between them. We can tap the heterogeneous mirroring capabilities of our virtualization lite engine to move data asynchronously over lower-bandwidth links to another location. This is less costly than moving physical batches of tape off-site daily. We also minimize costs by using Tier 2 or 3 storage as the replication target.
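Asynchronous mirroring is what makes the lower-bandwidth link workable: writes are acknowledged as soon as the local array commits them, then shipped to the remote site in the background. A minimal sketch, with all names assumed for illustration:

```python
import queue
import threading

class AsyncMirror:
    """Acknowledge writes locally, then trickle them to the remote
    (Tier 2/3) target over the WAN in the background. The remote copy
    may lag the primary by whatever writes are still queued."""

    def __init__(self, local, remote):
        self.local = local
        self.remote = remote
        self.pending = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, block, data):
        self.local.write(block, data)    # committed and acknowledged now
        self.pending.put((block, data))  # replicated later, out of band

    def _drain(self):
        while True:
            block, data = self.pending.get()
            self.remote.write(block, data)  # paced by WAN bandwidth
            self.pending.task_done()
```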

Once the primary site is ready to come back online, the virtualization lite engine at the remote location can mirror the database back to the primary site, letting the primary servers take control with minimal downtime.

Electronic message archiving: Managing out-of-control e-mail growth

Jim Geis, director of system solutions and services, Forsythe Solutions Group

Business problem: As part of a distributed IT operation, an entertainment company placed e-mail servers at each of its 15 offices. With e-mail and instant messaging (IM) use rapidly growing, predicting storage requirements for electronic messages had become difficult. Distributed operations were complicating capacity planning. And planning was about to get worse. Because of compliance regulations and the corporate world's increasingly litigious nature, the legal department mandated that IT keep a permanent record of all messages for at least seven years.

Traditional approach: Add e-mail servers to nightly backup processes to address legal's mandate, and then manage server space by reducing message stores on e-mail servers. However, this has several drawbacks. Even if Post Office Protocol is not used - so messages aren't downloaded automatically to the client and deleted from the server - users remain free to manage their own e-mail. They can delete messages stored on the server at will and exchange information with whomever they wish (although administrators might filter out certain domains). A disgruntled employee could leak messages or wipe an in-box clean of all messages. If users delete messages from the main server before a nightly backup, those messages are gone for good. And for users who never delete their messages, system administrators must ask them to do so when the servers run out of space - unless the administrators automatically expunge messages older than a specified date.

Should IT need to locate messages on a specified subject from an archival tape - perhaps for use in legal proceedings - finding messages based on content would be an arduous task, taking weeks to months and subjecting the company to court-imposed penalties for untimely compliance. And that presumes the message was saved to begin with; most IMs are never saved at all.

New data center approach: Treat every electronic message sent or received as a potential evidentiary fact. Know the location of electronic information, who sees it, how long to keep it and when to delete it. Develop access, creation, deletion and retention guidelines. Create a plan that coordinates the physical management of e-mail storage with logical electronic message content management, including IMs. Use an electronic message archiving infrastructure as the technology that lets you execute these guidelines and plans.
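Retention guidelines like these are easiest to enforce when expressed as data the archiving system can evaluate. A minimal sketch follows; the seven-year minimum comes from the scenario above, and everything else is an assumption for illustration:

```python
from datetime import datetime, timedelta

# Retention windows per message type. The seven-year e-mail minimum is
# the legal department's mandate; treating IMs the same way is an
# assumption for this sketch.
RETENTION_RULES = {
    "email": timedelta(days=7 * 365),
    "im":    timedelta(days=7 * 365),
}

def may_delete(message_type: str, received: datetime, now: datetime) -> bool:
    """A message may be purged only after its retention window expires."""
    return now - received > RETENTION_RULES[message_type]

# Usage: a six-year-old e-mail must still be retained.
print(may_delete("email", datetime(1998, 1, 1), datetime(2004, 1, 1)))  # False
```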

With a good electronic message archiving infrastructure, all messages would be processed and stored centrally, accessible not only by the user but also possibly by a subset of people from various business departments - legal and managerial, to name two. Two types of servers are required - one for processing messages and another for managing archival functions. Archival management includes indexing and searching messages based on various selection criteria, from date and sender to content. The business gets the bonus of knowledge management - the ability to mine message vaults for useful business information.
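The archival server's indexing job can be sketched as a central store plus a few inverted indexes; the class names and fields below are assumptions for illustration. A content search then takes seconds instead of the weeks of tape trawling described above:

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Message:
    msg_id: str
    sender: str
    sent: datetime
    body: str

class MessageArchive:
    """Central vault: every message is stored once and indexed by
    sender and content words, so legal discovery becomes a lookup."""

    def __init__(self):
        self.messages = {}                # msg_id -> Message
        self.by_word = defaultdict(set)   # content word -> msg_ids
        self.by_sender = defaultdict(set) # sender -> msg_ids

    def archive(self, msg: Message):
        self.messages[msg.msg_id] = msg
        self.by_sender[msg.sender].add(msg.msg_id)
        for word in msg.body.lower().split():
            self.by_word[word].add(msg.msg_id)

    def search(self, word=None, sender=None, since=None):
        """Intersect the selection criteria: content, sender, date."""
        ids = set(self.messages)
        if word:
            ids &= self.by_word[word.lower()]
        if sender:
            ids &= self.by_sender[sender]
        hits = [self.messages[i] for i in ids]
        if since:
            hits = [m for m in hits if m.sent >= since]
        return hits
```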