NAND Flash Scarcity – the Roots and Effects of the Issue

Those who have kept an eye on the SSD market over the past 12 months will have noticed a rise in prices due to NAND flash shortages. NAND flash memory is the technology behind power-efficient solid state drives (SSDs) and other storage memory found in personal computers and mobile devices.

This shortage is affecting those of us in the electronics industry in a variety of ways. Not only are prices rising, but more companies are now trying to fill the void by producing more SSDs, while others are hard at work creating alternatives.

The shortage we are experiencing is due to several factors, including:

  • A difficult transition from 2D to denser 3D technology on the manufacturing side
  • Continued high demand for flash for use in smartphones, in particular, the increased storage offered by iPhone 7s
  • Heightened demand from manufacturers desiring flash storage for datacenter hardware
  • Sustained demand for PCs and notebooks, with average flash adoption in notebooks expected to exceed 30%
  • Manufacturing and business troubles at two of the largest producers

That final point merits a few more words. One of the largest factors contributing to the NAND shortage is Toshiba’s current financial troubles. The second-largest supplier of flash memory in the global market and the first company to produce NAND flash, Toshiba has struggled with the production of 3D NAND memory. Its troubles are not, however, entirely on the manufacturing side. The electronics giant recently acquired a company to build nuclear power plants in the United States, a woeful project that has resulted in accounting scandals, legal actions, and billions of dollars in debt. The upshot: Toshiba is now selling off its semiconductor/NAND memory division, and we expect bidders to include Micron Technology, SK Hynix, Broadcom Ltd, and Western Digital.

Another, more highly publicized issue involves the largest supplier of flash memory in the global market: Samsung. The recall of Samsung’s Galaxy Note 7 smartphones a few months ago has been a factor in the global scarcity, as millions of devices had to be returned and replaced. Each returned device took a flash memory unit off the market, at least temporarily.

The net effect of this shortage is that prices have increased for PC manufacturers. As SSD performance reaches mainstream consumer awareness, including these drives in personal laptops is increasingly expected. SSDs, however, are not usually sold in the same capacities as standard hard disk drives (HDDs). Laptops sold with SSDs typically offer 128 to 256 GB, while a laptop with an HDD commonly ships with much more, anywhere from 500 GB to 1 TB. Even so, the price differential remains significant, and it is likely to stay that way while the scarcity lasts.
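
To put that capacity and price gap in rough numbers, here is a minimal back-of-the-envelope comparison; the drive prices below are illustrative assumptions, not current market quotes.

```python
# Rough price-per-gigabyte comparison of a typical laptop SSD vs. HDD.
# Prices are illustrative assumptions for the sake of the arithmetic.
drives = {
    "256 GB SSD": {"capacity_gb": 256, "price_usd": 100.0},   # assumed price
    "1 TB HDD":   {"capacity_gb": 1000, "price_usd": 50.0},   # assumed price
}

for name, d in drives.items():
    per_gb = d["price_usd"] / d["capacity_gb"]
    print(f"{name}: ${per_gb:.3f} per GB")
```

Even at the smaller capacity, the flash drive costs several times more per gigabyte, which is the differential the shortage keeps propped up.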

Nevertheless, some manufacturers are optimistic. Samsung is expected to begin operating a new plant in Pyeongtaek in July to further expand its 3D NAND production capacity. Micron will start producing 64-layer 3D NAND chips in the second quarter, with mass shipments expected in the second half of the year and “meaningful output” promised by the end of its fiscal year in December.

We won’t hold our breath, but as manufacturers continue to scramble and alternative storage technologies emerge, we’ll keep you updated. Keep your eye on this blog for further developments.

Use Case: Body Worn Camera Manufacturer requires a Data Storage Solution that solves for both Speed and Reliability

DIGISTOR was approached by a large, successful manufacturer of body worn camera equipment in 2013 concerning the launch of a new camera targeting the law enforcement community.

The camera had a beautiful industrial design, and was loaded with several new features including very high resolution video.

The accompanying software integrated seamlessly with a full chain-of-custody solution, ensuring that the digital evidence would be admissible in a court of law.

The Challenge

But there was one problem. The microSD card originally selected for the camera continually became corrupted, losing valuable evidence and making the new body camera all but useless.

Although hundreds of thousands of dollars had been invested in the camera’s hardware and software development, very little investigation had gone into the data storage solution the video would ultimately be written to.

The manufacturer turned to DIGISTOR for help.

The Solution

DIGISTOR worked with the manufacturer’s engineering team to understand the full picture, and identified two critical application requirements:

  • Speed. The customer had a critical high-speed write requirement that the SD card had to meet under all circumstances.
  • Reliability. It was crucial not only that the video be protected from corruption, but that the manufacturer’s customers have a firm understanding of the life expectancy of each card.

Early on in the design process, the manufacturer focused heavily on speed as the number one requirement.

Working closely with the DIGISTOR firmware engineers, the manufacturer was able to achieve the write performance needed to capture high-resolution video.

Moving on to the reliability requirements, the engineers quickly realized that the two bigger issues were inconsistent microSD card longevity and an unacceptable failure rate.

The DIGISTOR engineers’ test results showed that corrupted tables were locking up the SD cards and preventing recovery of potentially crucial video evidence. The engineers took the following approach to help the manufacturer identify the best microSD card solution:

  • DIGISTOR provided an application analysis card, which the manufacturer ran in a real-life application scenario for a two-week period.
  • DIGISTOR analyzed the captured data to determine how the application was accessing the SD cards, which also revealed the write/erase counts.
  • The data analysis also showed incompatible access patterns within the customer’s software that could be altered to improve overall reliability.
  • DIGISTOR performed a Failure Analysis (FA) on the failing cards that showed how the manufacturer’s application was writing to the SD card and where the issues were occurring.

The Results

With a full understanding of how the video application accessed the SD card, and of how uneven write/erase cycles caused by incompatible access patterns in the application were over-stressing memory cells, the DIGISTOR engineering team found that the standard wear-leveling algorithm was not activating properly, which was causing corruption within the SD card.
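
As an illustration of the kind of analysis described above (not DIGISTOR’s actual tooling), the sketch below tallies per-block erase counts from a hypothetical access log and flags uneven wear, the symptom that points to a wear-leveling problem.

```python
from collections import Counter

def wear_report(erase_log):
    """erase_log: iterable of block IDs, one entry per erase operation."""
    counts = Counter(erase_log)
    most, least = max(counts.values()), min(counts.values())
    return most, least, most / least

# Hypothetical log: the application hammers a handful of blocks
# (e.g. file-system tables) while the rest of the card sits nearly idle.
log = [0, 1, 0, 1, 2, 3] * 100 + list(range(4, 64))
most, least, spread = wear_report(log)
print(f"max erases: {most}, min erases: {least}, spread: {spread:.0f}x")
# A large spread means writes are not being leveled across the card,
# which wears out the hot blocks early and risks exactly this kind of corruption.
```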

DIGISTOR was able to modify standard firmware to meet the requirement of the video application.

DIGISTOR recommended the manufacturer make changes to the software, which improved the overall performance of the SD card and the BWC application. The manufacturer was able to achieve both the performance and reliability needed for a successful new camera launch.

Today, the manufacturer continues to grow its share of the body worn camera market and achieves a solid ROI on its secure data platform.

Evaluating Storage System Security

Storing digital data successfully requires a balance of availability, cost, performance and reliability. With the emergence of low-power, petabyte-scale archival storage and flash-based systems, it is getting increasingly difficult to quantify performance, reliability and space-efficiency trade-offs, especially when coupled with storage-security factors. Storage performance is measured by latency, throughput (bandwidth) and IOPS, with throughput typically reported as sustained (long) and peak (short) transfer rates; once storage security is employed, these measurements become far less uniform and harder to compare.
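
As a rough illustration of how those three metrics relate (a back-of-the-envelope model, not a benchmark), achievable IOPS is roughly the number of outstanding requests divided by latency, and throughput is IOPS times the I/O size:

```python
# Back-of-the-envelope relationships between latency, IOPS and throughput.
# The device latencies below are illustrative assumptions, not measured specs.

def iops_from_latency(avg_latency_ms, queue_depth=1):
    """Little's-law style approximation: concurrent requests / time per request."""
    return queue_depth / (avg_latency_ms / 1000.0)

def throughput_mb_s(iops, io_size_kb):
    return iops * io_size_kb / 1024.0

for device, latency_ms in {"HDD (~10 ms per I/O)": 10.0, "SSD (~0.1 ms per I/O)": 0.1}.items():
    iops = iops_from_latency(latency_ms)
    print(f"{device}: ~{iops:,.0f} IOPS, ~{throughput_mb_s(iops, 4):.1f} MB/s at 4 KB I/O")
```

Layering encryption, hashing or other security processing on top adds latency at one or more of these stages, which is why secured and unsecured configurations are so hard to compare on a single number.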

Although much work has been done on defining, testing and implementing mechanisms to safeguard data in long-term archival storage systems, data security verification in our cloud-based, mobile-driven, virtualized, containerized, software-defined remote storage world remains a unique and ongoing challenge.

Data security can be ensured in a variety of ways depending on the level of security desired, the performance required and the tolerance for user inconvenience. Most storage systems rely on encrypting data over the wire or on disk, typically combined with pre-computed checksums and secure hashes, but there is no standardized set of parameters or protocol for comparing network or on-disk performance and integrity in actual use.
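
A minimal example of the checksum/secure-hash approach mentioned above: compute a SHA-256 digest when an object is written and compare it when the object is read back. This uses Python’s standard hashlib; the file path is a placeholder, and the sketch covers integrity verification only, not encryption.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large objects need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# At write time, store the digest alongside the object (or in a catalog).
# At read or audit time, recompute and compare: a mismatch means the data
# on disk no longer matches what was originally written.
stored_digest = sha256_of("archive/object-0001.bin")   # placeholder path
assert sha256_of("archive/object-0001.bin") == stored_digest, "integrity check failed"
```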

In today’s multi-tenant virtualized container storage environments, containers take a different approach to virtualization: rather than emulating hardware (CPU/memory/network/storage) and running a guest OS on top of it, containerization separates users and processes from each other on a shared host. Multi-tenant security is especially important given the heavy reliance on always-on mobile data access to containerized cloud storage, where the top 10 security issues identified in 2015 by OWASP (www.owasp.org) were:

  • Insecure data storage
  • Weak server-side controls
  • Insufficient transport layer protection
  • Client-side injection
  • Poor authorization and authentication
  • Improper session handling
  • Security decisions via untrusted inputs
  • Side-channel data leakage
  • Broken cryptography
  • Sensitive information disclosure

Docker, one of the most widely deployed container technologies in use today, has recently addressed container user-security concerns by separating day-to-day container operation privileges from root privileges on the server host through user namespace remapping, thus minimizing the risk of cross-tenant user namespace and root server/data access.
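
For reference, Docker’s user-namespace separation is switched on through the daemon configuration. The sketch below simply writes the documented userns-remap option into /etc/docker/daemon.json; treat it as a hedged example and confirm the option, path and restart procedure against your Docker version’s documentation.

```python
import json
from pathlib import Path

# Enable Docker user-namespace remapping so that root inside a container maps
# to an unprivileged user on the host. "default" tells Docker to create and
# use the dockremap user; a daemon restart is required for this to take effect.
daemon_json = Path("/etc/docker/daemon.json")
config = json.loads(daemon_json.read_text()) if daemon_json.exists() else {}
config["userns-remap"] = "default"
daemon_json.write_text(json.dumps(config, indent=2))
print(f"Wrote {daemon_json}; restart the Docker daemon to apply.")
```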

The Center for Internet Security recently released a series of security benchmark resources (https://benchmarks.cisecurity.org) that, although the CIS is an independent authority and not a standards body, are based on industry-accepted FISMA, PCI, HIPAA and other system-hardening standards and help mitigate security risk for virtualized container storage infrastructure. A number of new products are being introduced specifically for virtual container data security, but what does “secure” really mean in the container context: secure container access, valid container data, native security of the application(s) in the container?

Most container data volumes today are tied to a specific virtual server; if the container fails or is moved from that server to another, the connection to the data volume is lost (no persistent storage), regardless of the security parameters employed. For virtual container data to be truly secure, a fully distributed, reliable, secure read/write container file system must be employed to ensure secure, resilient cloud deployments. Ideally this is achieved with a container-native cloud deployment on bare metal, without virtual machines, making the container’s data lifecycle and application scalability independent of the container’s host while minimizing the cost and complexity of provisioning and managing virtual machine hosts. Coupled with a hardware-secured, write-once data storage tier, this can ensure long-term data storage security whether or not encryption is used.

Most importantly, cloud storage encryption key management, although addressed by SNIA’s Cloud Data Management Interface (CDMI) and the Key Management Interoperability Protocol (KMIP), needs much wider adoption: today most crypto key management happens either at the individual storage device, with a single point of key-access failure, or as a cloud provider-managed option. Lose the keys, lose the data, no matter how securely it is managed or replicated.

Some data storage security basics:

  • Physical security is essential.
  • Develop internal storage security standards (authentication/authorization/access control methods, configuration templates, encryption requirements, security architecture, zoning, etc.).
  • Document, maintain and enforce security policies that cover availability, confidentiality and integrity for storage-specific areas.
  • Ensure basic access controls are in place to enforce your policies; change insecure access permissions (see the sketch after this list).
  • Disable unnecessary storage services related to NFS (mountd, statd, and lockd) when they are not required.
  • Limit and control network-based permissions for network volumes and shares.
  • Ensure proper authentication and credential verification is taking place at one or more layers above storage devices (within the host operating system, applications and databases).
  • Operating system, application and database-centric storage safeguards are inadequate on their own. Consider vendor-specific and/or third-party storage security add-ons.
  • Ensure audit logging is taking place for storage security accountability.
  • Perform semi-annual audits of physical location inventory and critical information assets.
  • Use separate storage administration and maintenance accounts with strong passwords, both for accountability and to minimize the damage a compromised account can do.
  • Encrypting data in transit helps, but should not be relied on exclusively.
  • Carefully consider software-based storage encryption solutions for critical systems (including key management).
  • Evaluate and consider hardware-based drive encryption on the client side.
  • Carefully select a unified encryption key management platform that includes centralized key lifecycle management.
  • Deploy Boolean-based file/stream access control expressions (ACEs) in container environments to simplify granting permissions to users/groups across data files/directories while providing an additional layer of data protection in multi-tenant environments.
  • Evaluate OASIS XACML and related policy-based schemas for secure access control.
  • Evaluate and consider write-once data storage technology for long-term archival storage tiers.
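
To make one of those items concrete, here is the permission-audit sketch referenced in the list: it walks a directory tree and flags anything writable by every user. The mount point is a placeholder, and a real audit would also cover ownership, ACLs and share permissions.

```python
import os
import stat

def find_world_writable(root):
    """Walk a tree and report files and directories writable by any user."""
    findings = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # unreadable entry; skip it rather than abort the audit
            if mode & stat.S_IWOTH:
                findings.append(path)
    return findings

for path in find_world_writable("/srv/storage"):   # placeholder mount point
    print("world-writable:", path)
```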

Is Hybrid Data Storage a Solution for you?

As the 2016 New Year unfolds, the demand for secure data storage will increase at every level within the IT stack. According to the 2015 Cyber Defense Report, 70% of organizations have been compromised by a successful data breach within the last 12 months. With a zero-trust data protection mantra, new pervasive data security solutions will emerge to touch applications, endpoints, networks and storage collectively. Encryption technology alone, when keys are managed by employees in both on-premise and cloud environments, is not an adequate cyber-attack deterrent, while control over data location and redundancy are key to maintaining compliance, data privacy and security across global, heterogeneous infrastructures.

To keep up with the burgeoning big-data deluge, organizations continue to move larger workloads into unified/virtualized environments, both on-premise and cloud. Many have already successfully deployed a variety of high-performance hybrid data storage solutions in the data center. In a recently released survey by ActualTech Media, many of these enterprises have begun incorporating flash-based storage in their data centers: 41% use on-premise HDD only; 9% use off-premise/cloud only; while 50% of respondents already use some type of on-premise flash-based storage (3% all-flash, 47% a hybrid mix of flash and HDD). For all the significant benefits virtualization brings to the IT infrastructure, one factor has inhibited wide-scale virtualization of legacy applications: performance.

Bandwidth, IOPS and latency are the standard storage performance metrics; latency is typically measured in milliseconds, with flash drives specified at fractions of a millisecond. Because data storage is usually the IT infrastructure’s latency bottleneck, minimizing latency is a key objective for faster I/O completion and faster transaction processing. As latency has a direct impact on VM performance in virtualized environments, the adoption of solid-state storage incorporating flash-caching hardware and software is enabling very low latencies while simultaneously helping to minimize network bandwidth bottlenecks. Flash SSD advantages include higher IOPS, reduced cooling, reduced power and lower failure rates than standard HDDs. Although flash SSD storage costs are declining rapidly, they are still roughly 2x higher than HDD per terabyte (depending on TCO variables), so a combined hybrid SSD/HDD automated tiered storage solution offers compelling metrics that IT professionals are finding both acceptable and in-budget. SSD-based data storage technology provides true business value by enabling faster access to information and real-time analytics. A hybrid SSD/HDD solution enables IT to balance cost and performance to meet their unique application and SLA requirements.
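
A quick blended-cost sketch shows why the hybrid approach is attractive; the per-terabyte prices and the 10/90 hot/cold capacity split below are assumptions for illustration, not vendor quotes.

```python
# Blended cost of a hybrid tier: a small flash tier for hot data in front of
# a large HDD tier for cold data. All numbers are illustrative assumptions.
ssd_cost_per_tb = 400.0   # assumed flash price per TB
hdd_cost_per_tb = 200.0   # assumed HDD price per TB (~2x cheaper, per the text)
hot_fraction = 0.10       # assumed share of capacity that needs flash speed

blended = hot_fraction * ssd_cost_per_tb + (1 - hot_fraction) * hdd_cost_per_tb
print(f"All-HDD:   ${hdd_cost_per_tb:.0f}/TB")
print(f"All-flash: ${ssd_cost_per_tb:.0f}/TB")
print(f"Hybrid:    ${blended:.0f}/TB, with the hot 10% of data on flash")
```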

Which flash-based SSD solution is truly right for your environment? There are many factors to consider when comparing industrial-grade versus commercial-grade flash storage devices. Industrial-grade devices use SLC (Single Level Cell) NAND as the storage medium, versus the MLC (Multi Level Cell) NAND used in commercial-grade devices. Based on voltage, an SLC cell records only a single bit (on or off), whereas an MLC cell stores two bits of data (one of four values: 00, 01, 10 or 11). SLC NAND offers 20-30x more endurance cycles than MLC NAND, better data retention and operation across extreme temperatures; the endurance estimate after the list below shows what those cycle counts mean in practice.

  • SLC (Single Level Cell)
    • highest performance, high cost, enterprise grade NAND
    • 90,000-100,000 program/erase cycles per cell (highest endurance)
    • lowest density (1 bit per cell, lower is better for endurance)
    • lower power consumption
    • faster write speeds
    • much higher cost (3x higher than MLC)
    • best fit for industrial grade devices, embedded systems, critical applications
  • eMLC (Enterprise Multi Level Cell)
    • good performance, aimed at enterprise use
    • 20,000-30,000 program/erase cycles per cell
    • higher density (2 bits per cell)
    • lower endurance limit than SLC, higher than MLC
    • lower cost
    • good fit for light enterprise use & high-end consumer products with more disk writes than consumer-grade MLC
  • MLC (Multi Level Cell)
    • average performance, consumer grade NAND
    • 10,000 program/erase cycles per cell
    • higher density (2 or more bits per cell)
    • lower endurance limit than SLC
    • lower cost (3x lower than SLC)
    • good fit for consumer products (not for critical applications that require frequent data updates)
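
To see what those program/erase cycle counts mean in practice, here is a rough drive-lifetime estimate. The capacity, daily write volume and write-amplification factor are assumptions for illustration; real endurance depends heavily on workload, over-provisioning and firmware.

```python
# Rough endurance estimate: years until a drive's P/E-cycle budget is consumed.
# lifetime = (capacity x P/E cycles) / (daily writes x write amplification x 365)

def lifetime_years(capacity_gb, pe_cycles, daily_write_gb, write_amplification=2.0):
    total_write_budget_gb = capacity_gb * pe_cycles
    consumed_per_year_gb = daily_write_gb * write_amplification * 365
    return total_write_budget_gb / consumed_per_year_gb

for name, pe in {"SLC": 100_000, "eMLC": 30_000, "MLC": 10_000}.items():
    years = lifetime_years(capacity_gb=64, pe_cycles=pe, daily_write_gb=50)
    print(f"{name}: ~{years:,.0f} years at 50 GB written per day on a 64 GB device")
```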

The pros and cons of HDD compared to SSD can be pared down to a handful of variables: availability, capacity, durability, encryption, environment (humidity/temperature), fragmentation, heat/BTUs produced, MTBF/failure rate, noise, physical form factor, power requirements, price, shock/vibration, speed, warranty, and write-protection capabilities. Write protection at the SSD and HDD firmware level, not just at the physical data and file system level, is one of the key differentiators when comparing secure SSD/HDD storage technology solutions; only a small number of manufacturers offer such functionality, and it currently commands a price premium. HDDs are vulnerable to magnetic pulses and X-rays, making automated replication to alternate HDDs, storage arrays and locations a necessity, which drives up cost while leaving the data ultimately susceptible to loss. SSDs are impervious to these effects, making them not only a viable tier-0 high-performance data cache but potentially a new long-term active-archive storage tier. New ISO and NIST secure storage regulatory compliance can also be a factor when evaluating which flash-based solution will best fit your requirements, as can DOD 5220, EU-DPD, HIPAA, FedRAMP, IRIG 106, NIST FIPS/FISMA/SP-800, NSA 130-2, PCI DSS, and many others.

For more in-depth technical comparisons and product information, give Digistor a call today at 800-816-1886 or email us at sales@digistor.com.

Is data-centric security essential in modern storage solutions?

Data storage security has quickly become both a hot topic and a new budget line item for CTOs and CIOs in 2015, both here in the US and around the world. An organization’s data is often its most valued asset, and keeping it stored safely is increasingly both a commercial and a legal imperative. Managing not only how data is stored but how it is securely accessed and communicated across a wide range of media and services is the fundamental building block of information assurance.

Regulatory compliance has driven a variety of storage practices over the years to guarantee information assurance, but one of the most sweeping new international reforms comes from the pending EU General Data Protection Regulation (GDPR) being adopted by all 28 EU member states. Substantial changes in scope to embrace the globalization of cloud computing, social networks and data breaches bring new levels of enforcement and heavy fines that will forever shake up EU data protection practices and privacy guidelines.

Often the security associated with data storage systems and supporting infrastructure has been overlooked because of a basic misunderstanding of the inherent risks to data storage ecosystems, leaving data at risk of compromise from a wide variety of events. The new NIST-sponsored Cyber-Physical Systems (CPS) framework was initiated to define key characteristics to better manage the development and implementation of both Industrial Internet and Internet of Things (IoT) physical, computational and data storage components across multiple smart application domains, including energy, healthcare, law enforcement, manufacturing and transportation.

The brand-new ISO/IEC 27040:2015 standard defines data storage-centric security as the application of physical, technical and administrative controls to protect storage systems and infrastructure against unauthorized disclosure, modification or destruction. These controls can be compensatory, corrective, detective, deterrent, preventive or recovery in nature.

The rapid adoption of complex software-defined storage (SDS) systems, i.e., the uniting of compute, networking, storage and virtualization into a hyper-converged storage solution, became a top data center trend impacting both data security and data recovery strategies in 2015. Although SDS simplifies rapid provisioning, implementation and redundancy while providing significant savings in cost, power and space, storage-centric data security remains a significant gap in the SDS infrastructure.

Due to superior accessibility, capacity on demand, flexibility and lower overall IT costs compared to legacy online compute and data storage methodologies, cloud computing has quickly become a mainstay worldwide. Yet, just like traditional online compute/storage methodologies, cloud computing has its own set of unique data security issues. Mitigating risks before and throughout a cloud adoption is the number one imperative for CIOs, CISOs and DPOs as they transition applications and data to the cloud. The decision to move to the cloud depends on the sensitivity of the data and application, the service-level agreement and the overall cloud security infrastructure, and, ultimately, on whether the business value offsets the risks.

According to a recently released 2016 Trend Micro security report, despite the need for a Data Protection Officer (DPO) or Chief Information Security Officer (CISO), fewer than 50% of enterprise organizations will have one, or a budget for one, by the end of 2016. With the EU GDPR, coupled with the ISO 27040 data security standard, mandating a significantly higher degree of data protection, a DPO/CISO role dedicated solely to ensuring the integrity of data within and outside the enterprise is a wise investment. With this higher degree of awareness, legislation and technology around data storage-centric security, we will begin to see a proactive shift in enterprise policies, practices and strategies that will bring effective protection to the storage infrastructure.

Public safety is now a concern of every commercial enterprise, municipality, school and university. High-resolution video surveillance and law enforcement body-worn cameras (BWC) are generating more long-term video storage requirements than ever before. Enterprise IT must be able to balance a budget for both cameras and a secure infrastructure that enables easy, yet secure, data access. A wide variety of new BWC, chain-of-custody, evidence management and surveillance technology solutions are blossoming as new local, state and federal budget resources are being made available in 2016.

In the first quarter of 2015, IDC reported that 28.3 exabytes (roughly 28 billion gigabytes) of data storage capacity were shipped worldwide. The largest share (23%) of this spending went to server-based storage and hyperscale (SDS-architecture) cloud infrastructures, while traditional external storage arrays fell significantly and were replaced by all-flash and hybrid flash (NAND/HDD) arrays. Less than 0.05% of all these storage products shipped employed Self-Encrypting Drive (SED) technology, while almost 90% of all flash arrays shipped were SED capable. SEDs offer FIPS 140-2 compliant security without the overhead of a software-based encryption scheme, coupled with self-describing encryption key management, making them a valued component of the secure data storage infrastructure.

Over the next several months of 2016, we will delve more deeply into the practical application of specific secure storage technologies, why and how to put security directly into the physical storage device, the advantages and disadvantages of specific data storage technologies, cost analysis and more. Stay tuned.

Anyone can build an SD card, but not all SD cards are created equal

SD cards and microSD cards of all varieties permeate the consumer market. Prices are continually dropping, and they’re mass produced in huge factories in Asia. The cost of materials is so low that you may start to wonder: is there a way to ensure you receive a high-quality SD card for your automotive, medical, body worn camera or other demanding product?

Do you really want consumer SD or microSD cards for these projects? Not if storage is a critical component of your solution. You’re dealing with narrow requirements, and you don’t have room for mistakes and equipment malfunctions. Home users of electronics like SD cards can work around the occasional dud or random defective piece, but working in a professional industry is a different ballgame. What is simply an annoyance to a casual user can make the difference between success and failure, and the odds aren’t small. You need to get things right the first time.

That’s why industrial SD and microSD cards aren’t merely an option for those working in certain industries; they’re a necessity. Not all SD cards are created equal, and comparing your industrial SD card with one bought off the Walmart shelves is like comparing a grocery store hot dog with fine Paris cuisine. The cards you buy off store shelves are produced with minimal quality control, and there’s no consistency even among cards that come from the same manufacturer. You might test a hundred and find one you really like, but there’s no guarantee that the next one you buy with the same labeling is made of the same components or has anything like the same capability, and no guarantee of what will happen when you put it under stress.

What about DIGISTOR industrial SD cards? These come with a price tag, but there’s nothing arbitrary about the price: you get what you pay for. In this case, it’s high-quality MLC/SLC NAND, a wide temperature range, and Consistency spelled with a capital C. It means you don’t have to worry about the bill of materials (BOM) changing on you, and that’s huge. If you build an application around one of our SD or microSD cards, you can be confident that those cards will continue working in future instances of the application: no unpleasant surprises, no need for frequent retests.

Using an industrial SD or microSD card means you get quality and you get control. You get to utilize the resources within the controller. You have control over the BOM. Nothing can change without your say-so, and you know exactly what the capabilities of your cards are in every instance.

We’d be happy to share more with you about our industrial-quality SD cards; just call us for more info. DIGISTOR stands for reliability and quality, and SD cards for industrial applications are one case where those two things really do matter.

Bringing Your Data Home

You had your picture archive safe on Flickr, your documents on Dropbox, and a running archive of your devices on Apple’s iCloud. But when something happens to one of these services, like the two-day Dropbox downtime, you wonder whether keeping your archives in cloud storage really is the best way to go. Cloud storage, no matter how respected the provider, is prone to downtime, and having your precious files suddenly disappear is not something you can take with equanimity.

How to Make a Smooth Switch From Cloud Storage to Home Data Archive Options
There’s something about having all that data available at home, in an archive of Blu-ray discs or a storage drive: even if all of today’s big web companies go bankrupt, you’ve nothing to worry about. But what is the best storage medium, and how do you make the switch? Isn’t it too much work to be feasible? Bringing your data home may not be a half-hour job, but if you do your planning first, it can be a smooth, easy run rather than the huge headache it threatens to be.

Your first task is researching which type of storage device to use. Over the years you’ve probably accumulated more than a small amount of data, so your archive solution will need to have high capacity. You also want it to be reliable, long lasting, and you want to be able to add to it periodically. Should you buy a nice high-capacity hard disk drive, or is shelling out the bucks for a state of the art solid state drive the way to go?

The answer is—neither! Hard disk drives and solid state drives are both wonderful in their places, but for a home archive you can’t do better than go with Blu-ray discs. Unlike hard disk drives, which have lots of moving parts that are prone to breakage, a Blu-ray disc is simply a ‘page’ of written information—cold storage, if you will. Unlike solid state drives, where data could deteriorate if not accessed, the data on your Blu-ray discs can be left in a drawer for years and only read when you want what you’ve archived.  

Blu-ray discs are affordable, and they won’t take up much room. Over the years you could accumulate a collection of these discs, which can be stored conveniently in a small cabinet or magazine.

You can buy a quality external Blu-ray burner for a very reasonable price; and if you get it from us at Digistor, it’ll come with a program called Rewind™—software that will make archiving super-easy for Windows or OS X. You’ll need to buy your actual discs as well, of course—a set of 10 25GB or 50GB discs is a good place to start.

When you’ve settled on your storage device and ordered your equipment, the next thing to do is figure out how to reclaim your data from cloud storage. Some cloud storage solutions make export super-easy; from others, it is a pain, but it’s better to do it now than five years from now, when you’ll have even more to deal with! If you’re looking at long download times, you may want to set up the process in the evening and let it run overnight. Make sure you have room on your computer for everything you’ll be downloading; if you don’t, set up an external hard disk for temporary storage. You can always do it in parts, downloading one disc’s worth of archive material at a time.
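
If you want to estimate how many discs the job will take before you start burning, a quick calculation helps; the archive size and overhead figure below are just examples.

```python
import math

archive_size_gb = 480    # example: total size of everything you downloaded
disc_capacity_gb = 25    # single-layer Blu-ray; use 50 for dual-layer discs
overhead = 0.05          # assume ~5% lost to file-system and session overhead

usable_per_disc = disc_capacity_gb * (1 - overhead)
discs_needed = math.ceil(archive_size_gb / usable_per_disc)
print(f"{archive_size_gb} GB needs about {discs_needed} x {disc_capacity_gb} GB discs")
```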

Ready? Push that download button, and watch that data materialize out of thin air and come to solid existence on your home PC. When it’s all there, plug in your Blu-ray burner, stick a disc in and open Rewind™. Making a running archive of your data could scarcely be easier.  Choose a name for your archive, select your files, click ‘Archive It!’, and let the burn begin!

Then there is nothing left to do but organize your Blu-ray stash and file it somewhere safe and out of the way. Ideally, you’d make two identical archives, one for home and one for an alternate location. Disaster doesn’t happen often, but when it does, it’s well to be prepared.

For an extra safeguard, you can always keep your files in your old web repository as well. Cloud solutions are wonderful in their place as a way to give you access to specific data from a wide variety of locations. They’re also wonderful as a quick backup of small files in case of natural disasters such as tornadoes and fires. But for an all-purpose general archive of all your data, pictures, and information, nothing beats a well-organized home-based storage center, like your new mini-cabinet of Blu-ray discs.

What You’re Paying for When You Buy SSD Drives Designed for Professional Video Shoots

Sure, you can get an SSD that looks as though it ought to fit your video camera for fairly cheap on eBay or off the shelf. So what makes a “professional video” SSD, well, professional?

To begin with, not all SSDs are compatible with a high-end video camera like those from Blackmagic Design.

Some don’t fit the camera properly: a standard 7mm SSD can be enough of a mismatch either to keep the drive from going in at all or to let it slip around once it’s in place. Most newly released SSDs aren’t designed with cameras in mind and are built to be as thin as possible, and the extra space a thin drive leaves inside the camera can cause rattling and additional wear on the SATA connection.

Others have firmware that just doesn’t work with your camera, interrupting your workflow with an inability to record or causing you to drop frames every time you try to shoot an important video.

That’s why brands like Blackmagic supply their customers with a list of approved SSDs that have been tested and found to work. These are higher-end SSDs, rigorously vetted so you can depend on them, and we’re proud that our DIGISTOR Professional Video SSD series is included on that list.

But they aren’t just another name on that list. We’ve built them to be something special.

What is it that sets DIGISTOR Professional Video SSD Drives apart?
DIGISTOR Professional Video SSDs aren’t just compatible with your Blackmagic camera; they’re made to function with the camera as if they were born together. You can take your DIGISTOR Professional Video SSD Drive straight out of the box, stick it in your camera, and expect it to work immediately. Contrast that with the formatting, reformatting, and extensive fiddling you can expect if you use another SSD drive and you’ll already start to appreciate the synergy we’ve worked for.

Additionally, here’s an SSD series that’s all about video. (In fact, it’s the first and only!)

See Also: Top 5 things cinematographers love about our Professional Video SSDs

DIGISTOR Professional Video SSDs aren’t just drives co-opted for filming needs; they’re designed for filming in 2.5K RAW and 2.5K and 4K ProRes, along with our special 1TB SSD designed for the 4K RAW and ProRes (HQ) 422 format. Extensively tested with Blackmagic Cinema and Production Cameras, our SSDs do more than support the equipment preferred by professional filmmakers. Powerful, reliable and durable, DIGISTOR Professional Video SSDs aim to make a difference in your filming experience.

Bottom line? Made-for-PC or bottom shelf SSDs may save you a few dollars up front, but there’s a chance you could be throwing the entire cost away (not to mention the price of lost work!) if one fails to meet your needs.

Industrial SSDs on the Frontiers of Science: Using SSDs at the International Space Station

It’s not only high-end business and heavy-duty applications that rely on the power of Industrial Solid State Drives (SSDs) these days. Besides powering most of our earthly communications and industry, Industrial SSDs are also pushing the frontiers of science beyond the limits of our atmosphere. They have become the storage medium of choice at the International Space Station, allowing reliable, high-volume data collection like never before.

Data storage in space comes with its own set of special challenges. Not only does the storage medium need to be compact, taking up a minimum of space, it also needs to be light, as every ounce on the journey to space counts. Resource limitations also demand low power consumption.

Finally, any storage system used should offer high reliability, an extreme-temperature operating range, the ability to function without gravity, and the ability to withstand high doses of radiation without data corruption.

Industrial strength SSD systems are all good as far as most of those criteria go. Radiation alone is a potential problem area. Down here on earth, we’re protected from cosmic radiation by the ozone layer and our atmosphere. This covering is effective in shielding us from most debilitating radiation.  Out there in space, they’re going (figuratively) naked.

Off-the-Shelf and Into Space
NAND flash memory tends to have a vulnerability to radiation; ionizing effects have the potential to do a number on the individual cells that hold the information bytes, resulting in voltage shifts and data corruption. But NASA scientists have discovered that while some memory chips fail dramatically under radiation pressures, others have the capacity to perform reliably.

This means that high quality industrial SSDs can be used after a rigorous test-and-retest procedure in which the highest performers are selected.

That’s why the International Space Station (ISS) now has the capacity to send a large volume of data and video images down to us here; shouldering past the old limits of knowledge and understanding in a way that’s never before been possible.

And it’s only getting better. While the switch from older operating technologies to SSDs began several years ago, just last week astronaut Scott Kelly switched out the old-fashioned Columbus Video Cassette Recorders (VCRs) at the starboard end of the ISS and replaced them with new solid state drive recorders.

As the transition to SSDs continues, we can expect to see a much larger volume of higher quality images and information beamed down to us directly from the outer frontiers of scientific exploration.

What Went Wrong With TLC NAND

When Samsung pushed the envelope and introduced its TLC NAND flash memory for general use, it had the makings of a landmark innovation. TLC (triple-level cell) NAND is cheaper to manufacture than either SLC or MLC NAND because it fits more data into the same NAND cell: three bits per cell, rather than the one bit or two bits that single-level (SLC) and multi-level (MLC) NAND store in the same cell space. You’d think TLC NAND would take over the market in short order; no reason to waste resources manufacturing more expensive SLC or MLC NAND.
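
The arithmetic behind that trade-off is simple: each extra bit per cell doubles the number of voltage levels the cell must keep apart, so TLC packs in more data but leaves far less margin for error. A small idealized sketch (real cells and thresholds are more complicated):

```python
# Idealized view of NAND cell types: more bits per cell means more voltage
# levels squeezed into the same window, hence less margin between levels.
VOLTAGE_WINDOW = 1.0  # normalized; real thresholds vary by process and vendor

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    levels = 2 ** bits
    margin = VOLTAGE_WINDOW / levels
    print(f"{name}: {bits} bit(s)/cell -> {levels} levels, ~{margin:.3f} of the window per level")
```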

When introduced, the new TLC NAND solid state drives seemed to have conquered TLC’s earlier difficulties with some state-of-the-art firmware. Read speeds looked pretty: the Samsung SSD 840’s 500 MB/s is nothing to sneeze at, and reliability seemed to be a non-issue.

But mounting excitement over the potentially cost-effective storage innovation waned as performance problems were discovered.

In fact, it wasn’t long before users began reporting a new and extremely debilitating problem. Those pretty read speeds and that near-100% reliability only held for new, freshly written data. Data that had been sitting on the drive for, say, all of eight weeks would have deteriorated to the point that it could only be read at much slower speeds.

Meaning that by the time data had sat static on your drive for six months or a year, those previously high read speeds would have slowed to a snail’s pace.

It turns out that this is a problem inherent in the TLC design. Voltage drift happens in every NAND drive over time, but in SLC and MLC NAND the drift is small and consistent enough to be accounted for in the read algorithms. When you pack three data bits into a cell, though, the margins between voltage levels shrink and data deterioration shows up far sooner. What’s worse, there’s no longer a generalized algorithm that can take all the shifting into account, so the old data is simply blurred.

Samsung has introduced two firmware updates in an attempt to smooth over the problem. The first, a fancy algorithm that was meant to take account of the voltage drift and factor it in where necessary, completely failed at solving the issue.

The second, while more successful, offers a somewhat unpleasant workaround: the drive is set to rewrite all data regularly, so nothing is ever old. It does manage to get around the problem: if all data is new data, it will all be readable and quickly accessible. However, since every NAND SSD has a finite number of write/rewrite cycles, this isn’t an ideal fix.

What does this all boil down to?
Simply that TLC NAND is not the future of data storage, and it doesn’t even have a good footing in the present. If your data matters in the long term, you’ll want to go with a higher-quality NAND: MLC NAND for your basic SSD needs, or SLC NAND for industrial use or super-sensitive data storage. There’s no way around it.
