Nick Jheng, Regional Manager for the Middle East at Synology, points out that flash is not perfect, storage is corruptible, and the total cost of software ownership is a better measure than licensing costs alone
Like every other area of the information technology industry, storage drives and backup-and-recovery data management are going through cycles of innovation. The growing importance of cloud, software-defined management, persistent memory, and flash technologies are among the developments that data center administrators need to come to terms with. Approaching them from the angle of total cost of ownership throws up some important conclusions.
Licenses for backup and recovery of data are available on a subscription basis and as perpetual licenses. At first glance, subscription licenses, or monthly licenses with an annual contract, appear more economical than perpetual licensing. Over the lifetime of the software, however, various additional costs accumulate, creating a need to look at the overall total cost of ownership.
Typically, as IT organizations use applications, they tend to buy additional support, maintenance, patching, and upgrade services, each of which carries an extra cost. Annual support services are typically in the range of 25% of the perpetual licensing fee. Taking VMware as an example, the total cost of ownership comes out quite differently depending on whether it is computed on a per-socket or a per-host basis.
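The effect of those accumulating support fees can be sketched with a simple comparison. The prices below are hypothetical placeholders, not vendor quotes; the only figure taken from the text is the roughly 25% annual support rate on perpetual licenses.

```python
# Illustrative multi-year TCO comparison: perpetual vs. subscription.
# All dollar amounts are hypothetical, chosen only to show the shape
# of the calculation.

def perpetual_tco(license_fee: float, years: int,
                  support_rate: float = 0.25) -> float:
    """One-off license fee plus annual support at ~25% of that fee."""
    return license_fee + license_fee * support_rate * years

def subscription_tco(annual_fee: float, years: int) -> float:
    """Recurring annual subscription; support is typically bundled."""
    return annual_fee * years

years = 5
print(f"Perpetual over {years} years:    ${perpetual_tco(10_000, years):,.0f}")
print(f"Subscription over {years} years: ${subscription_tco(5_000, years):,.0f}")
```

With these assumed numbers the perpetual option costs $22,500 over five years against $25,000 for the subscription, even though the subscription looked cheaper in year one; real comparisons would also fold in upgrade fees and, in VMware's case, the per-socket or per-host license count.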
While the rapid read and write capabilities of solid-state drives (SSDs) are well known, they have limitations over the longer term that every data center administrator should be aware of. SSDs work by writing and erasing data in NAND blocks, the smallest units of storage in an SSD, and each block supports only a limited number of program-erase cycles. Data in a NAND block cannot be overwritten in place; it must be erased first. As a result, the performance of an SSD degrades over time, giving the drive a limited life span.
Algorithms written into the SSD's firmware help distribute usage across the NAND blocks so that the wear caused by erase operations is spread over the whole flash array. However, this can only be done in the background, and it requires a certain percentage of the NAND blocks to be reserved for this back-and-forth movement of data. This is called overprovisioning: partitioning and reserving a percentage of good NAND blocks for these operations.
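The core idea behind wear levelling can be reduced to a toy model: direct each erase-and-write cycle at the block with the fewest erase cycles so far. This is a minimal sketch only; real SSD firmware distinguishes static from dynamic wear levelling and interleaves it with garbage collection across the overprovisioned spare blocks.

```python
# Toy wear-levelling model: each write goes to the least-worn block,
# spreading erase cycles evenly across the flash array.

def pick_block(erase_counts: list[int]) -> int:
    """Index of the block with the fewest erase cycles."""
    return min(range(len(erase_counts)), key=lambda i: erase_counts[i])

def write_cycle(erase_counts: list[int]) -> int:
    """Erase-then-write the least-worn block; return its index."""
    i = pick_block(erase_counts)
    erase_counts[i] += 1  # NAND must be erased before it is rewritten
    return i

counts = [0, 0, 0, 0]
for _ in range(8):
    write_cycle(counts)
print(counts)  # wear is spread evenly: [2, 2, 2, 2]
```

Without this levelling, repeated writes to one hot address would exhaust that block's program-erase budget while the rest of the drive stayed nearly new.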
Therefore, if the total capacity of an SSD is 1 terabyte, after setup the administrator may find that the effective storage area is only about 950GB. As use of the SSD progresses, the usable percentage continues to shrink, since reserved blocks must remain available to sustain high performance.
Catastrophic data loss is often linked to a growing number of bad sectors on a traditional hard disk. Bad sectors build up on the surface of hard disks through wear and tear, physical shock, overheating, and file-system errors, among other causes. As bad sectors accumulate, sequential reading and writing of data is disrupted, because alternative blocks must be found while the bad sectors are skipped. The process of skipping bad sectors and finding good sectors to write to is called remapping.
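Remapping can be pictured as an indirection table: an access aimed at a sector marked bad is redirected to a spare sector, at the cost of an extra lookup on every subsequent access. The class and method names below are illustrative, not actual drive firmware.

```python
# Toy model of bad-sector remapping: accesses to a bad sector are
# redirected to a spare sector drawn from a reserved pool.

class Disk:
    def __init__(self, sectors: int, spares: int):
        self.bad = set()            # sectors marked unusable
        self.remap = {}             # bad sector -> assigned spare
        self.free_spares = list(range(sectors, sectors + spares))

    def mark_bad(self, sector: int) -> None:
        self.bad.add(sector)

    def resolve(self, sector: int) -> int:
        """Physical sector actually used for this logical address."""
        if sector not in self.bad:
            return sector           # healthy sector: direct access
        if sector not in self.remap:
            # first access after failure: assign a spare from the pool
            self.remap[sector] = self.free_spares.pop(0)
        return self.remap[sector]

d = Disk(sectors=100, spares=4)
d.mark_bad(7)
print(d.resolve(7))  # redirected to spare sector 100
print(d.resolve(8))  # healthy sector, returned unchanged: 8
```

The extra lookup is cheap for one sector, but as the bad-sector set and remap table grow, sequential transfers turn into scattered seeks, which is why heavily remapped drives slow down before they fail outright.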
Hard disks with a large number of bad sectors go through long periods of remapping, which slows access to data. Continuous remapping, combined with a growing count of bad sectors, eventually ends in catastrophic data failure of one sort or another. Hard disk drives that have developed bad sectors are 10 times more likely to fail than those without.