Software Defined Storage (SDS) vs Traditional SAN

Igor Nemy
3 min readFeb 14, 2015

This week I spent a couple of hours reflecting on almost everything I know about the Software Defined Storage (SDS) paradigm. As a result of these reflections, I have identified the following main advantages of SDS over the old traditional SAN today:

  1. Deep SLA Atomic Granularity. Because SDS completely separates the logical storage service from the underlying hardware, we can apply features such as rebalancing, migration, replication, deduplication, and snapshotting at a much finer granularity of stored elements. All of these tasks can simply be configured per unit, making each SLA unique. For example, instead of tuning a whole LUN or volume aggregate, we can create a unique SLA policy for a particular virtual disk. This also makes data portable, eliminating planned and unplanned downtime on a per-VM or per-virtual-disk basis across different physical disks, nodes, racks, rooms, and so on.
  2. Pay As You Grow model. Because SDS has no central elements, there are no central bottlenecks or single points of failure: horizontal scalability is theoretically unlimited, and vertical scalability is limited only by the servers' hardware. And since we can scale (or modernize) the storage cluster drive by drive, investment can be spread very evenly as the storage platform is built out. This is essentially a Pay As You Grow model ☺
  3. Very low OPEX. In the traditional paradigm, you have to plan the lifecycle of the future storage system. During the planning stage, you need to understand how much disk space and how many I/O operations it must serve, while also taking into account how it will operate and scale. You should also consider whether implementing new functionality (or disabling extra features) requires planned downtime, and how current or future disaster replication will be implemented. Once you understand the storage architecture for that lifecycle, you must invest in building a SAN or modifying your existing one. So if you choose to present expanded capacity to a new compute server, you not only have to add new disks to the storage array and create a new LUN, but also perform separate fabric zoning and install a multipathing driver.
    In the new software-defined paradigm the operational approach is completely different: no RAID-related calculations, no SAN setup, no zone creation, no special cabling or special switch hardware configuration. For SDS, disks, nodes, and rooms are all suitable replication locations. You can easily add new disks to a node, a node to a rack, or a rack of nodes to a system without any downtime. Rebalancing, migration, new replication, and so on can simply be programmed, because this storage already is a program.
  4. Investment Protection. In SDS, as in any other Software Defined technology, it is mostly the software layer that becomes obsolete. The hardware consists of simple commodity elements such as disk drives, memory, and CPUs; all the “magic” happens inside the logical subsystems, implemented entirely in software. Thus, to get any new feature you only need to update the software layer. This is great protection for your investment.
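To make the first two points a bit more concrete, here is a small sketch of how per-virtual-disk SLA policies and drive-by-drive scaling might look in code. The classes and method names (`SlaPolicy`, `Cluster`, `set_policy`, and so on) are purely illustrative, not the API of any real SDS product:

```python
# Toy model of two SDS ideas from this article: per-virtual-disk SLA
# policies and drive-by-drive ("Pay As You Grow") capacity scaling.
# All names here are hypothetical, invented for illustration only.

from dataclasses import dataclass, field

@dataclass
class SlaPolicy:
    replicas: int = 2          # copies kept across failure domains
    snapshots: bool = False    # periodic snapshotting enabled?
    dedup: bool = False        # inline deduplication enabled?

@dataclass
class Cluster:
    drive_tb: float = 4.0      # capacity of one commodity drive, TB
    drives: int = 0
    policies: dict = field(default_factory=dict)  # vdisk name -> SlaPolicy

    def add_drives(self, n: int) -> None:
        """Scale out one drive at a time: no new LUNs, zoning, or cabling."""
        self.drives += n

    def set_policy(self, vdisk: str, policy: SlaPolicy) -> None:
        """The SLA is attached to one virtual disk, not a whole LUN."""
        self.policies[vdisk] = policy

    def raw_tb(self) -> float:
        return self.drives * self.drive_tb

    def usable_tb(self, vdisk: str) -> float:
        """Usable space seen by a virtual disk depends on its own replica count."""
        return self.raw_tb() / self.policies[vdisk].replicas

cluster = Cluster()
cluster.add_drives(10)         # start small...
cluster.set_policy("vm42-data", SlaPolicy(replicas=3, snapshots=True))
print(cluster.raw_tb())                # 40.0 TB raw
print(cluster.usable_tb("vm42-data"))  # ~13.3 TB usable at 3 replicas

cluster.add_drives(5)          # ...and grow as demand grows
print(cluster.raw_tb())                # 60.0 TB raw
```

The point of the sketch is that the policy lives on the virtual disk and capacity grows in drive-sized increments, rather than in whole-array or whole-LUN units as in a traditional SAN.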

If you, my dear reader, see any other obvious distinctions, please leave a comment here or reach me by email; I would very much appreciate it.
