Evaluating Storage System Security

Storing digital data successfully requires balancing availability, cost, performance, and reliability. With the emergence of low-power, petabyte-scale archival storage and flash-based systems, it is increasingly difficult to quantify performance, reliability, and space-efficiency trade-offs, especially once storage-security factors are added. Storage performance is measured by latency, throughput (bandwidth), and IOPS, with throughput typically reported as sustained (long-term) and peak (short-term) transfer rates; when storage security is employed, these measurements become far less uniform and much harder to compare.

Although much work has been done on defining, testing, and implementing mechanisms to safeguard data in long-term archival storage systems, verifying data security in today's cloud-based, mobile-driven, containerized, software-defined remote storage world remains a unique and ongoing challenge.

Data security can be ensured in a variety of ways, depending on the level of security desired, the performance required, and the tolerance for user inconvenience. Most storage systems rely on encrypting data over the wire or on disk, typically using pre-computed checksums and secure hashes for integrity, but there is no standardized set of parameters or protocol for comparing network or on-disk performance and integrity while the system is in actual use.
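The pre-computed secure-hash approach mentioned above can be sketched in a few lines: a digest is computed at write time, stored alongside the data's metadata, and recomputed at read time to detect corruption or tampering. This is a minimal illustration using SHA-256, not any particular vendor's integrity protocol:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Pre-computed secure hash, stored with the data's metadata at write time."""
    return hashlib.sha256(data).hexdigest()

stored = b"block of archived data"
expected = sha256_hex(stored)  # computed once, kept with the object's metadata

# At read time, recompute and compare to detect corruption or tampering.
assert sha256_hex(stored) == expected
assert sha256_hex(stored + b"x") != expected  # any change breaks the digest
```

Note that verifying every read this way is exactly the kind of integrity overhead that makes secured-storage performance hard to compare against unsecured baselines.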

In today’s multi-tenant virtualized container storage environments, containers rely on a different approach to virtualization: rather than emulating hardware (CPU/memory/network/storage) and running a guest OS on top of it, containerization isolates users and processes from one another on a shared host kernel. Multi-tenant security is especially important given the heavy reliance on always-on mobile access to containerized cloud storage; the top-10 security issues identified in 2015 by OWASP (www.owasp.org) were:

  • Insecure data storage
  • Weak server-side controls
  • Insufficient transport-layer protection
  • Client-side injection
  • Poor authorization and authentication
  • Improper session handling
  • Security decisions via untrusted inputs
  • Side-channel data leakage
  • Broken cryptography
  • Sensitive information disclosure

Docker, one of the most widely deployed container technologies in use today, has recently addressed container user-security concerns by separating daily container operation privileges from root privileges on the server host, thus minimizing the risk of cross-tenant user-namespace and root server/data access.
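The idea behind this separation (user-namespace remapping) is that a container's root user (UID 0) is mapped onto an unprivileged host UID by adding a per-tenant base offset, so "root in the container" holds no privilege on the host. The toy sketch below illustrates the arithmetic only; it is not Docker's implementation, and the base/range values are illustrative:

```python
def remap_uid(container_uid: int, host_base: int, range_len: int = 65536) -> int:
    """Map a container UID into an unprivileged host UID range.

    Mirrors the idea behind subordinate-UID entries such as
    "dockremap:100000:65536": container UID 0 becomes host UID 100000,
    so root inside the container is an ordinary user on the host.
    """
    if not 0 <= container_uid < range_len:
        raise ValueError("container UID outside the mapped range")
    return host_base + container_uid

print(remap_uid(0, 100000))     # container root -> unprivileged host UID 100000
print(remap_uid(1000, 100000))  # ordinary container user -> host UID 101000
```

Two tenants given disjoint host base offsets can never collide in UID space, which is what blocks cross-tenant access at the kernel level.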

The Center for Internet Security recently released a series of internet security benchmark resources (https://benchmarks.cisecurity.org) that, although produced by an independent authority rather than a standards body, are based on industry-accepted FISMA, PCI, HIPAA, and other system-hardening standards, and can help mitigate security risk in virtualized container storage infrastructure implementations. Although a number of new products focus specifically on virtual container data security, what does ‘secure’ really mean in the container context: secure container access, valid container data, native security of the application(s) in the container?

Most container data volumes today are tied to a specific virtual server; if the container fails or is moved from that server to another, the connection to the data volume is lost (no persistent storage), regardless of the security parameters employed. For virtual container data to be truly secure, a fully distributed, reliable, secure read/write container file system must be employed to ensure secure, resilient cloud deployments. Ideally, this can be achieved with a container-native cloud deployment on bare metal, without virtual machines, making the container’s data lifecycle and application scalability independent of the container’s host while minimizing the future cost and complexity of provisioning and managing virtual machine hosts. That, coupled with a hardware-secured, write-once data storage tier, can ensure long-term data storage security whether or not encryption is used.
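The write-once guarantee of such a storage tier can be sketched in software: once a block is written, any further write attempt fails. This is a minimal illustration of WORM (write once, read many) semantics against an ordinary filesystem; real WORM tiers enforce the same rule in hardware or firmware, below the filesystem:

```python
import os
import tempfile

def worm_write(path: str, data: bytes) -> None:
    """Write a file exactly once; any attempt to rewrite it raises.

    os.O_EXCL makes creation atomic (the call fails if the file exists),
    and mode 0o444 leaves the stored copy read-only afterwards.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o444)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)

# Usage: the first write succeeds, the second raises FileExistsError.
path = os.path.join(tempfile.mkdtemp(), "block-0001.dat")
worm_write(path, b"immutable archive record")
try:
    worm_write(path, b"tampered")
except FileExistsError:
    pass  # overwrite refused: the original record is preserved
```

Because the stored record can never be silently replaced, its integrity holds irrespective of whether encryption is also in use, which is the point the paragraph above makes about write-once tiers.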
Additionally, and most importantly, cloud storage encryption key management, although addressed by standards efforts such as SNIA’s Cloud Data Management Interface (CDMI) and the proposed Key Management Interoperability Protocol (KMIP), requires much wider adoption. Today, most crypto key management is done either at the individual storage device level, with a single point of key-access failure, or as a cloud provider-managed option. Lose the key(s), lose the data, no matter how securely it is managed or replicated!
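The "lose the key, lose the data" point can be made concrete with a toy stream cipher (deliberately simplistic and NOT cryptographically secure; real systems use vetted ciphers such as AES): decryption with the exact key recovers the data, while any other key yields garbage, which is why centralized key lifecycle management matters as much as the cipher itself.

```python
import hashlib

def toy_stream_cipher(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher (illustration only -- NOT secure).

    A SHA-256-based keystream is XORed with the data; applying the
    function twice with the same key restores the original bytes.
    """
    keystream = bytearray()
    counter = 0
    while len(keystream) < len(data):
        keystream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, keystream))

secret = b"archive block 0042"
ct = toy_stream_cipher(b"right-key", secret)
assert toy_stream_cipher(b"right-key", ct) == secret  # key present: recoverable
assert toy_stream_cipher(b"wrong-key", ct) != secret  # key lost: data lost
```

Replicating the ciphertext a thousand times changes nothing in the second assertion; only replicating and escrowing the key does.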

Some data storage security basics:

  • Physical security is essential.
  • Develop internal storage security standards (authentication/authorization/access control methods, configuration templates, encryption requirements, security architecture, zoning, etc.).
  • Document, maintain and enforce security policies that cover availability, confidentiality and integrity for storage-specific areas.
  • Ensure basic access controls are in place to enforce your policies; change insecure access permissions.
  • Unload or disable unneeded NFS-related storage services (mountd, statd, and lockd).
  • Limit and control network-based permissions for network volumes and shares.
  • Ensure proper authentication and credential verification is taking place at one or more layers above storage devices (within the host operating system, applications and databases).
  • Operating system, application, and database-centric storage safeguards are often inadequate on their own; consider vendor-specific and/or third-party storage security add-ons.
  • Ensure audit logging is taking place for storage security accountability.
  • Perform semi-annual information audits of physical location inventory and critical information assets.
  • Separate storage administration and maintenance accounts with strong passwords for both accountability and to minimize potential compromised-account damage.
  • Encrypting data in transit helps, but should not be relied on exclusively.
  • Carefully consider software-based storage encryption solutions for critical systems, particularly their key management.
  • Evaluate and consider hardware-based drive encryption on the client side.
  • Carefully select a unified encryption key management platform that includes centralized key lifecycle management.
  • Deploy Boolean-based file/stream access control expressions (ACEs) in container environments to simplify granting permissions to users and groups across data files and directories while providing an additional layer of data protection in multi-tenant environments.
  • Evaluate OASIS and XACML policy-based schemas for secure access control.
  • Evaluate and consider write-once data storage technology for long-term archival storage tiers.
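Several of the basics above, notably enforcing access controls and changing insecure permissions, are easy to audit programmatically. The sketch below flags world-writable files in a directory tree; the scratch directory and file names are illustrative only:

```python
import os
import stat
import tempfile

def world_writable(path: str) -> bool:
    """Flag insecure access permissions: any user may modify this path."""
    return bool(os.stat(path).st_mode & stat.S_IWOTH)

def audit_tree(root: str) -> list:
    """Walk a directory tree and return every world-writable path found."""
    flagged = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            p = os.path.join(dirpath, name)
            if world_writable(p):
                flagged.append(p)
    return flagged

# Usage sketch against a scratch directory with one safe and one loose file.
root = tempfile.mkdtemp()
open(os.path.join(root, "ok.txt"), "w").close()
os.chmod(os.path.join(root, "ok.txt"), 0o644)
open(os.path.join(root, "loose.txt"), "w").close()
os.chmod(os.path.join(root, "loose.txt"), 0o666)
```

Running such an audit on a schedule gives the security accountability trail the logging and semi-annual audit items above call for.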
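The Boolean ACE item above can also be illustrated. The expression encoding here (nested tuples of "user"/"group"/"and"/"or" nodes) is a hypothetical format invented for this sketch, not any vendor's syntax; the point is that one Boolean expression can grant access across users and groups without enumerating individual permissions:

```python
def ace_allows(expr, user: str, groups: set) -> bool:
    """Evaluate a Boolean access-control expression for a user and their groups.

    An expression is ("user", name), ("group", name), or
    ("and"/"or", left_expr, right_expr).
    """
    kind = expr[0]
    if kind == "user":
        return expr[1] == user
    if kind == "group":
        return expr[1] in groups
    if kind == "and":
        return ace_allows(expr[1], user, groups) and ace_allows(expr[2], user, groups)
    if kind == "or":
        return ace_allows(expr[1], user, groups) or ace_allows(expr[2], user, groups)
    raise ValueError(f"unknown ACE node: {kind!r}")

# "(user alice) or (group admins and group tenant-a)"
ace = ("or", ("user", "alice"),
             ("and", ("group", "admins"), ("group", "tenant-a")))
```

Requiring a tenant group in every conjunction, as in the example, is what adds the extra multi-tenant protection layer the bullet describes.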