Storage Performance – SAN versus HyperConverged

A friend recently proposed an interesting hypothesis; my response follows below.


 

Proposal

It has been suggested that a Fibre Channel based Storage Area Network (SAN) enjoys its peak performance on the day it is first installed, and that performance degrades steadily from there 🤓

As servers or additional storage are added to the SAN, the IOPS available to every hypervisor host take a hit.

In contrast, most hyperconverged implementations use flash-based read/write caches that allow near-linear (or even better) scaling as nodes are added. The virtual storage controllers in hyperconverged solutions help out here as well.


 

Response

My response is based on my experience migrating physical workloads to VMware hypervisors with both SAN and NAS storage since 2005.

TL;DR

My short answer is that a properly designed AND maintained virtual infrastructure will perform much better than a “simple” hyper-converged solution any day. You will also potentially benefit from increased security, flexibility, and scalability. However, be prepared to pay quite a bit more, not just for the initial solution deployment but throughout the lifetime of the solution.

Longer answer

As with most things in IT, it’s easy to say “well, it depends on your situation”. But we can make some generalizations. Read to the end for a real-world example from a VDI project.

With the advent of public cloud providers, enterprise customers now expect legacy virtual solutions to be offered as subscription-based OpEx, where you pay only for what you need on a monthly or annual basis. Traditionally, enterprises have had to anticipate the hardware capacity needed when deploying new server compute infrastructure. Those capital expense (CapEx) costs are depreciated over time, so servers are expected to last 3 to 5 years or more.

History

Let’s first look at a short history of enterprise computing and virtualization to set the stage.

  1. The first computers used by most organizations were expensive “mainframes” in a centralized server room, with remote terminals used to access the data and interact with the application. Less expensive “mini” computers continued this centralized trend. These systems were used for data processing and report generation, so once the information was input, processed, and printed out, copies could be made for safekeeping off site.
  2. With the introduction of Ethernet and the personal computer, “client-server” applications became popular. As the criticality of the data increased, features like RAID and tape backup became more important.
  3. SCSI technology enabled multiple machines to share a central storage system and, although complicated and difficult to set up, became the de facto solution for critical databases.
  4. As the volume of data and the demands for scale increased, Fibre Channel based Storage Area Networks became popular and allowed a “pay as you grow” model.
  5. Virtualization solutions initially used much of the same hardware technology as before. The first products on the market had very short Hardware Compatibility Lists, as the vendors had to create custom drivers to achieve the desired performance and reliability.
  6. Compatibility and performance for hypervisors have since improved, so most organizations now have a “Virtual First” policy for new workloads.
  7. Public cloud providers are starting to eat into the CapEx market as enterprise applications and security processes are made compatible.

Discussion

So, with that out of the way, let’s compare SAN storage options to hyper-converged.

Consider an organization supporting tens or hundreds of applications that provide data processing services to internal employees. With a 4-year capital expense policy, they’d need to replace a quarter of their server capacity each year. Accounting for growth (assuming the business is doing well), they will be purchasing more powerful machines that run more efficiently, consuming less power overall while offering more RAM and CPU. Of course the storage needs will increase too. These are the CapEx costs. There is also OpEx to consider, such as software licensing, power, space, and cooling, and probably the number one cost of most IT shops: the people. Reducing the in-house expertise required can free up money for other things.
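To put rough numbers on that refresh cadence, here is a minimal back-of-the-envelope sketch in Python. The fleet size, growth rate, and per-server cost below are made-up illustrative figures, not numbers from any real environment.

```python
# Back-of-the-envelope CapEx refresh model: a 4-year cycle means roughly
# 1/4 of the fleet is replaced each year, plus growth.
# All figures are hypothetical placeholders.

fleet = 400            # current number of servers (assumed)
refresh_years = 4      # depreciation / refresh cycle from the 4-year policy
growth_rate = 0.10     # 10% annual growth in capacity demand (assumed)
unit_cost = 12_000     # cost per replacement server in USD (assumed)

for year in range(1, 5):
    replaced = fleet / refresh_years   # quarter of the fleet refreshed
    added = fleet * growth_rate        # extra servers to cover growth
    capex = (replaced + added) * unit_cost
    fleet += added
    print(f"Year {year}: replace {replaced:.0f}, add {added:.0f}, "
          f"CapEx ≈ ${capex:,.0f}")
```

Even with flat prices, the growth term means each year’s CapEx outlay creeps upward, which is exactly why the OpEx subscription model is attractive to finance departments.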

If you have the budget and want to “have the best” when it comes to performance, security, and flexibility, then a traditional SAN-based virtual infrastructure customized exactly to the needs of your applications will be the best. You can configure storage volumes as desired with SSD, flash, and SATA spinning disk, as well as connectivity using 10 Gb or 40 Gb Ethernet or 4/8/16 Gb Fibre Channel, based on need. A Fibre Channel based SAN also has unique multi-path fault-tolerance capabilities and can scale out easily with load sharing over many links.

Going with NAS or hyper-converged infrastructure will typically compromise on customization, with fewer “flavors” available for performance, security, and fault tolerance. The benefit is that you can start small with as few as two machines and then scale out to the number of nodes supported by your vendor’s cluster software.

Example

In a real-world example from a few years ago, a team was brought in to provide a replacement virtual infrastructure for hosting 10,000 Windows 7 desktop machines for workers in the public sector. They had a 5 million dollar equipment budget (about $500 per user) to cover the storage, network, and compute. The bid was won by a joint proposal from 3 vendors representing storage, compute, network, and hypervisor.

There were 8,000 users on old Windows desktops that were showing signs of wear. Desktop support was provided by a third party, and the cost to maintain these desktops, some as old as 10 years, was increasing each year. Common complaints were application crashes, inability to print, over 30 minutes to reboot or power on, and failed system updates. Moving to a Hosted Virtual Desktop (HVD) would have many benefits and offer a quick Return On Investment (ROI).

The storage needs for this new environment were easy to calculate based on some simple application suite tests and some math to extrapolate from them. Once we had determined the capacity and performance (Input/Output Operations per Second, or IOPS) requirements, we could plan the type and model of storage needed. For a Windows desktop we split the disk storage needs into three parts:

  1. Machine OS data – read-only optimized – copied from a golden master image
  2. User data – random read/write – needs to be backed up
  3. Scratch data / swap – fast, but not stored or backed up

A centralized NAS for shared data was also set up, but that is outside the scope of this environment and was maintained by another group.

We calculated both read and write IOPS per VM for each partition (see above) and then divided by 3 based on a 33% concurrency expectation. These numbers were derived during an extended pilot test with a group of 100 “typical” users.
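To make that sizing arithmetic concrete, here is a minimal sketch of the calculation in Python. The per-VM IOPS figures are placeholders standing in for the pilot measurements, which are not reproduced here.

```python
# Per-VM IOPS by partition as (read, write) pairs.
# Placeholder values -- the real numbers came from the 100-user pilot.
per_vm_iops = {
    "os":      (10, 2),   # read-mostly clones of the golden master image
    "user":    (4, 6),    # random read/write user data, backed up
    "scratch": (3, 8),    # swap/temp: fast, not stored or backed up
}

vm_count = 10_000        # desktops to host
concurrency = 1 / 3      # 33% expected concurrency (the "divide by 3")

total_read = total_write = 0.0
for partition, (r, w) in per_vm_iops.items():
    reads = r * vm_count * concurrency
    writes = w * vm_count * concurrency
    total_read += reads
    total_write += writes
    print(f"{partition:>7}: {reads:,.0f} read IOPS, {writes:,.0f} write IOPS")

print(f"  total: {total_read:,.0f} read IOPS, {total_write:,.0f} write IOPS")
```

With per-partition totals in hand, each tier can be mapped to an appropriate class of storage: read-optimized for the OS images, backed-up capacity for user data, and fast ephemeral disk for scratch/swap.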

You can see how a SAN makes a difference in a design like this. The whole point is to set expectations and control long-term operational costs for a large user population supported by an outsourced third-party service.

Conclusion

Whether you invest in a SAN-based or hyper-converged storage infrastructure, you are choosing a design pattern that you are betting will last through your capital equipment life cycle.

A hyper-converged infrastructure will provide a cookie-cutter solution that meets most of your needs, is simpler to operate and expand, and will save you money over time.

With a component-based SAN as part of your custom-built virtual infrastructure, you have the option to add local DAS (direct-attached storage) and NAS (network-attached storage) to grow and adapt to the changing needs of your applications and customers. But be prepared to spend the extra money on highly experienced people to operate the environment, or suffer the consequences if something goes wrong in the design.

Reference

For reference, the following example EMC SAN solution is provided.

EMC VMAX Enterprise Storage Array:

  • US $550,000.00 + $50 shipping
  • EMC VMAX 20K
  • (4) 128GB VMAX 20K Engines with D@RE (data at rest encryption)
  • In one (1) storage bay
  • all FC FE director cards
  • (4) EMC Storage Bays, each with 16 3.5″ DAEs (30-drive loop lengths)
  • (690) 15K 600GB FC Drives
  • (50) 2000GB 7K SATA Drives
  • (42) 200GB EFD