Shingled Magnetic Recording

Shingled Magnetic Recording (SMR) is a magnetic storage data recording technology used in hard disk drives (HDDs) to provide increased areal density compared to same-generation drives using conventional magnetic recording (CMR) technology, resulting in a higher overall per-drive storage capacity.

SMR Overview

Conventional magnetic recording places gaps between recording tracks on HDDs to account for the track mis-registration (TMR) budget. These separators reduce areal density, as portions of the platter surface are not fully utilized. Shingled magnetic recording removes the gaps between tracks by writing tracks in an overlapping manner, forming a pattern similar to shingles on a roof. Physically, this is done by writing a track of data, then partially overlapping (or “shingling”) it with the next track of data. By repeating this process, more data tracks can be placed on each magnetic surface. The figure below illustrates this principle.

Figure: SMR disk track organization

The write head designed for SMR drives is wider than required for a single track of data. It produces a stronger magnetic field suitable for magnetizing films of high coercivity. Once one track has been written, the recording head is advanced by only part of its width, so the next track will partially overwrite the previous one, leaving only a narrow band for reading.
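
The density gain follows from simple geometry: the effective track pitch becomes the distance the head advances per track rather than the full write width plus a guard gap. The sketch below works through this with purely hypothetical dimensions; real head and track geometries vary by product and generation.

```c
#include <stdio.h>

/* Hypothetical dimensions only, for illustration: a conventional layout
 * needing a 70 nm track pitch (write width plus guard gap) versus a
 * shingled layout that advances the head just 50 nm per track. */
int main(void)
{
    const double cmr_track_pitch_nm = 70.0; /* write width + inter-track gap */
    const double smr_track_pitch_nm = 50.0; /* head advance per shingled track */

    /* More tracks fit on the same surface in inverse proportion to the pitch. */
    double track_density_gain = cmr_track_pitch_nm / smr_track_pitch_nm;
    printf("Track density gain from shingling: %.2fx\n", track_density_gain);
    return 0;
}
```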

Overlapping tracks are grouped into bands of fixed capacity, called zones, for more effective data organization and partial update capability. Recording gaps are placed between zones to prevent the wide write head from overwriting data in an adjacent zone.
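
Because zones have a fixed capacity, locating the zone that holds a given logical block is simple arithmetic. The following sketch assumes a hypothetical geometry of 256 MiB zones made of 512-byte logical blocks; real drives report their own zone size.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical geometry: 256 MiB zones of 512-byte logical blocks,
 * i.e. 524288 blocks per zone. Real drives report their own zone size. */
#define BLOCKS_PER_ZONE 524288ULL

/* Map a logical block address to its zone number and the zone's first LBA. */
static void locate_zone(uint64_t lba, uint64_t *zone, uint64_t *zone_start)
{
    *zone = lba / BLOCKS_PER_ZONE;
    *zone_start = *zone * BLOCKS_PER_ZONE;
}

int main(void)
{
    uint64_t zone, zone_start;

    locate_zone(1000000ULL, &zone, &zone_start);
    printf("LBA 1000000 is in zone %llu (zone starts at LBA %llu)\n",
           (unsigned long long)zone, (unsigned long long)zone_start);
    return 0;
}
```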

Figure: SMR disk overlapping tracks

Fundamental Implications of SMR

Because of the shingled format of SMR, all data streams must be organized and written sequentially to the media. While the methods of SMR implementation may differ (see the SMR Interface Implementations section below), the data must nonetheless be written to the media sequentially. Consequently, should a particular data sector need to be modified or rewritten, the entire band of tracks (the zone) must be rewritten. Because the modified data sector potentially lies under another “shingle” of data, in-place modification is not permitted, unlike with traditional CMR drives.

With SMR, all of the shingled tracks overlapping the track that contains the sector to be modified must be rewritten in the process. SMR hard disks still provide true random-read capability, allowing rapid data access like any traditional CMR drive. This makes SMR an excellent technology candidate for both active archive and higher-performance sequential workloads.
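
To make the cost of an in-place update concrete, the sketch below models a single zone entirely in memory and shows the read, reset, and sequential rewrite steps an update implies. The structure and helper are illustrative only; real zones are accessed through the ZBC/ZAC commands described later.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Purely in-memory model of one shingled zone, used only to illustrate the
 * rewrite cycle; real zones are accessed through ZBC/ZAC commands. */
#define BLOCK_SIZE       512
#define BLOCKS_PER_ZONE  1024          /* deliberately tiny for the example */

struct zone {
    uint8_t  data[BLOCKS_PER_ZONE * BLOCK_SIZE];
    uint64_t write_pointer;            /* next block that may be written */
};

/* Updating one block inside a shingled zone means rewriting the zone:
 * copy it out, patch the block in memory, reset the write pointer, and
 * write everything back sequentially. */
static void update_block(struct zone *z, uint64_t block, const uint8_t *new_data)
{
    static uint8_t copy[BLOCKS_PER_ZONE * BLOCK_SIZE];

    memcpy(copy, z->data, sizeof(copy));              /* random reads are unrestricted */
    memcpy(copy + block * BLOCK_SIZE, new_data, BLOCK_SIZE);

    z->write_pointer = 0;                             /* reset the zone write pointer */
    memcpy(z->data, copy, sizeof(copy));              /* sequential rewrite of the zone */
    z->write_pointer = BLOCKS_PER_ZONE;               /* zone is full again */
}

int main(void)
{
    static struct zone z = { .write_pointer = BLOCKS_PER_ZONE };
    uint8_t block[BLOCK_SIZE] = { 0xAB };

    update_block(&z, 42, block);
    printf("zone rewritten, write pointer is back at block %llu\n",
           (unsigned long long)z.write_pointer);
    return 0;
}
```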

SMR Interface Implementations

The interface of SMR disks can implement different zone management methods, or models, with visible differences from the host and user point of view. It is important to understand their differences, as not all implementation options are appropriate for a particular storage application. The three models that are in use today are:

  • Drive Managed allows plug-and-play deployment without modifying host software. It accommodates both sequential and random writing, but can result in highly unpredictable device performance.

  • Host Managed requires host-software modification as it accommodates only sequential write workloads to deliver both predictable performance and control at the host level.

  • Host Aware offers the convenience and flexibility of the Drive Managed model, with the potential performance and control advantages of Host Managed implementations. The Host Aware model accommodates both sequential writes and some random writes but is the most complicated option to implement for hard disks. To obtain maximum benefit and predictability, a host should do much of the same work as with the Host Managed model.

Depending on the implementation model, the layout of the recording media and the performance characteristics of the resulting drive may differ.
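
As one concrete illustration of how a host can tell which model a drive implements, the sketch below reads the zone model that the Linux block layer exposes through sysfs (the /sys/block/<disk>/queue/zoned attribute, available on kernels with zoned block device support). Drive Managed disks appear as "none" because their zoning is invisible to the host; the default device name used here is hypothetical.

```c
#include <stdio.h>
#include <string.h>

/* Sketch: query the zone model Linux reports for a block device via sysfs.
 * Assumes the /sys/block/<disk>/queue/zoned attribute, which holds
 * "none", "host-aware" or "host-managed". Drive Managed disks appear as
 * "none" because their zoning is hidden from the host. */
int main(int argc, char **argv)
{
    const char *disk = (argc > 1) ? argv[1] : "sda";  /* hypothetical default */
    char path[256], model[32] = "unknown";
    FILE *f;

    snprintf(path, sizeof(path), "/sys/block/%s/queue/zoned", disk);
    f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%31s", model) != 1)
            strcpy(model, "unknown");
        fclose(f);
    }
    printf("%s: zone model is %s\n", disk, model);
    return 0;
}
```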

Drive Managed Model

The major selling point for Drive Managed disks is that no change is required at the host side, resulting in a plug-and-play deployment with any existing system. To allow for this deployment ease, Drive Managed HDDs must convert all random writes into sequential shingled writes by means of media caching and data location indirection. No responsibility or action is required of the host. The media cache provides a distributed storage area to lay down data at the best possible speed (regardless of the host-provided block address associated with that data). Buffered data is migrated from the media cache to its final destination as part of the drive’s background idle-time function. In short, Drive Managed implementations require stronger caching algorithms and more random-write space to temporarily hold non-sequential data.
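
The sketch below is a toy model of the bookkeeping this implies: random writes land in a staging (media cache) area and an indirection table remembers where each host LBA currently lives until idle time allows the data to be moved to its shingled home. The structures and functions are illustrative; actual drive firmware is far more elaborate.

```c
#include <stdint.h>

/* Toy model of Drive Managed indirection: randomly written host LBAs are
 * remapped to a staging ("media cache") region, then migrated to their
 * shingled home location during idle time. Illustrative only. */
#define NR_HOST_LBAS (1u << 20)
#define UNMAPPED     UINT32_MAX

static uint32_t indirection[NR_HOST_LBAS]; /* host LBA -> media-cache slot */
static uint32_t next_cache_slot;

/* A random host write is appended to the media cache and its location noted. */
static void random_write(uint32_t host_lba)
{
    indirection[host_lba] = next_cache_slot++;
}

/* During idle time the drive merges cached data back into its zone and
 * drops the mapping, so later reads go straight to the home location. */
static void idle_time_cleanup(uint32_t host_lba)
{
    if (indirection[host_lba] != UNMAPPED) {
        /* ...rewrite the affected zone with the cached data merged in... */
        indirection[host_lba] = UNMAPPED;
    }
}

int main(void)
{
    for (uint32_t i = 0; i < NR_HOST_LBAS; i++)
        indirection[i] = UNMAPPED;

    random_write(12345);       /* lands in the media cache, not in its zone */
    idle_time_cleanup(12345);  /* migrated home when the drive is idle */
    return 0;
}
```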

Because the HDD constantly works to optimize the caching algorithm and sector indirection handling, performance is unpredictable for certain workloads, such as large-block random writes with high duty cycles. Due to this wide range of performance variability and unpredictability, the Drive Managed zone model is considered impractical and unacceptable for enterprise-class deployments. Drive Managed disks are suitable for applications that leave the drive idle time to perform background tasks such as moving data around. Examples of appropriate applications include client PC use and external backup HDDs in the client space. In the enterprise space, parallel backup tasks become random write operations within the HDD, which typically result in unpredictable performance or significant performance degradation.

Host Managed Model

The Host Managed model is emerging as the preferred option for implementing the host interface of shingled magnetic recording disks. Unlike the Drive Managed model, the Host Managed model moves management of the SMR sequential write constraint to the host, allowing data stream processing to be optimized. This is the key difference from Drive Managed disks: Host Managed devices do not allow any random write operations within zones.

The host must manage all write operations so that they are sequential within a particular zone by following a write pointer. Once data is written to the zone, the write pointer advances to indicate the starting point of the next write operation in that zone. Any out-of-order write, that is, a write operation not starting at the zone's write pointer location, forces the drive to abort the operation and flag an error. Recovery from such an error is the responsibility of the controlling host software. This enforcement allows Host Managed devices to deliver predictable, consistent performance.
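
The sketch below models this rule for a single Sequential Write Required zone: a write is accepted only if it starts exactly at the write pointer and fits within the zone. The structure and return values are illustrative, not an actual drive interface.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Sketch of the write-pointer rule a Host Managed device enforces on a
 * Sequential Write Required zone. Illustrative only. */
struct seq_zone {
    uint64_t start_lba;     /* first LBA of the zone */
    uint64_t nr_blocks;     /* zone size in blocks */
    uint64_t write_pointer; /* next LBA that may be written */
};

/* A write must begin exactly at the write pointer and stay inside the zone;
 * anything else is an unaligned-write error that host software must handle. */
static bool zone_write(struct seq_zone *z, uint64_t lba, uint64_t blocks)
{
    if (lba != z->write_pointer ||
        lba + blocks > z->start_lba + z->nr_blocks)
        return false;                    /* drive aborts and reports an error */

    /* ...transfer data to the media here... */
    z->write_pointer += blocks;          /* pointer advances past the new data */
    return true;
}

int main(void)
{
    struct seq_zone z = { .start_lba = 0, .nr_blocks = 65536, .write_pointer = 0 };

    printf("write at WP:  %s\n", zone_write(&z, 0, 8)   ? "accepted" : "rejected");
    printf("out-of-order: %s\n", zone_write(&z, 100, 8) ? "accepted" : "rejected");
    printf("next valid write starts at LBA %llu\n",
           (unsigned long long)z.write_pointer);
    return 0;
}
```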

With the Host Managed model, the device blocks are organized into a number of zones ranging from one to potentially many thousands. There are two types of zones: Sequential Write Required Zones and optional Conventional Zones. The Conventional Zones, which typically occupy a very small percentage of the overall drive capacity, accept random writes and are typically used to store metadata. The Sequential Write Required Zones occupy the majority of the overall drive capacity, and within them the device enforces the sequentiality of all write commands. It should be noted that with the Host Managed model, random read commands are supported and perform comparably to those of standard CMR drives.

Unlike the Drive Managed model, the Host Managed model is not backwards-compatible with legacy host storage stacks. However, Host Managed devices allow enterprises to maintain control and management of storage at the host level.

Host Aware Model

The Host Aware model is a superset of the Host Managed and Drive Managed models. Unlike the Host Managed model, the Host Aware model preserves compatibility with legacy host storage stacks.

With Host Aware devices, the host does not necessarily have to change its software or file system to make all writes to a given zone sequential. However, unless the host software is optimized for sequential writing, performance becomes unpredictable, similarly to Drive Managed disks. Therefore, if host applications require predictable and optimized performance, the host must take full responsibility for data stream and zone management, in a manner identical to Host Managed devices. In enterprise applications where there may be multiple operations or multiple streams of sequential writes, performance will quickly become unpredictable without host intervention if zone block management is deferred to the drive.

The Host Aware model also defines two types of zones: the Sequential Write Preferred Zone and an optional Conventional Zone. In contrast to the Host Managed model, the Sequential Write Required Zones are replaced with Sequential Write Preferred Zones, thereby allowing random write commands in all zones. If a zone receives random writes that violate the write pointer, the zone becomes a non-sequential write zone. A Host Aware disk implementation provides internal measures to recover from out-of-order or non-sequential writes. To manage out-of-order writes and background defragmentation, the drive holds an indirection table, which must be permanently maintained and protected against power disruptions. Out-of-order data is recorded into the caching areas; from there, the drive can append the data to the proper zone.
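
The sketch below is a toy model of this behavior for one Sequential Write Preferred zone: a write that does not start at the write pointer is still accepted, but the zone is flagged non-sequential (its data staged through the drive's internal cache and indirection table). The enum and fields are illustrative, not the ZBC/ZAC wire format.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy model of a Sequential Write Preferred zone on a Host Aware disk. */
enum zone_state { ZONE_SEQUENTIAL, ZONE_NON_SEQUENTIAL };

struct swp_zone {
    uint64_t write_pointer;
    enum zone_state state;
};

/* Unlike a Host Managed drive, a Host Aware drive accepts writes that do
 * not start at the write pointer: the data is staged internally and the
 * zone is simply flagged as non-sequential. */
static void swp_zone_write(struct swp_zone *z, uint64_t lba, uint64_t blocks)
{
    if (lba == z->write_pointer)
        z->write_pointer += blocks;        /* sequential: pointer advances */
    else
        z->state = ZONE_NON_SEQUENTIAL;    /* accepted, but via indirection */
}

int main(void)
{
    struct swp_zone z = { .write_pointer = 0, .state = ZONE_SEQUENTIAL };

    swp_zone_write(&z, 0, 8);      /* in order */
    swp_zone_write(&z, 512, 8);    /* out of order, still accepted */
    printf("zone is now %s\n",
           z.state == ZONE_SEQUENTIAL ? "sequential" : "non-sequential");
    return 0;
}
```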

Host Aware disks are ideal for sequential read/write scenarios where data is not overwritten frequently and latency is not a primary consideration.

Governing Standards

A new set of commands has been defined for SMR disks implementing the Host Managed and Host Aware models. These command interfaces are standards-based, developed by the INCITS T10 committee for SCSI drives and the INCITS T13 committee for ATA drives. There is no specific industry standard for the Drive Managed model because it is backward compatible and fully transparent to hosts.

SCSI Standard: ZBC

The Zoned Block Command (ZBC) revision 05 is the published approved standard defining the new zone management commands and read/write command behavior for Host Managed and Host Aware SCSI drives. Implemented in conjunction with the applicable clauses of the SPC-5 and SBC-4 specifications, the ZBC specifications define the model and command set extensions for zoned block devices.

T10 members may access the latest ZBC document from the T10 web site. Non-members may purchase the approved standard ANSI INCITS 536-2016: Information technology – Zoned Block Commands (ZBC).

ATA Standard: ZAC

The INCITS Technical Committee T13 is responsible for all interface standards relating to the popular AT Attachment (ATA) storage interface used with many disk drives today. The Zoned Device ATA Command Set (ZAC) is the published approved standard specifying the command set that host systems use to access storage devices implementing the Host Aware Zones feature set or the Host Managed Zones feature set. This standard is an extension to the ATA implementation standards described in AT Attachment - 8 ATA/ATAPI Architecture Model (ATA8-AAM) and provides a common command set for systems manufacturers, system integrators, software suppliers, and suppliers of storage devices that provide one of the zones feature sets.

T13 members may access the latest ZAC document from the T13 web site. Non-members may purchase the approved standard ANSI INCITS 537-2016: Information technology – Zoned Device ATA Command Set (ZAC).

Zone Block Commands

The ZAC and ZBC standards describe the set of commands necessary for a host application to manage zones of a Host Managed or Host Aware drive. While these two standards describe commands for two separate protocols (SCSI and ATA), the zone management commands defined are semantically identical and the behavior of read and write commands defined are also compatible. In addition to the zone management commands, the ZBC and ZAC standards also both define the zone models discussed in the SMR Interface Implementations section.

Both standards define five zone management commands as extensions to the basic disk command set, which otherwise remains similar to that of a CMR drive.

  • REPORT ZONES is the command that a host implementation can use to discover the zone organization of a Host Managed or Host Aware drive. The REPORT ZONES command returns a list of zone descriptors indicating the starting LBA, size, type, and condition of each zone. For Sequential Write Required Zones (Host Managed drives) and Sequential Write Preferred Zones (Host Aware drives), a zone descriptor also indicates the current position of the zone write pointer. This information allows host software to implement sequential write streams to zones (a sketch exercising this command follows this list).

  • RESET ZONE WRITE POINTER is the command that host software can use to reset the location of a zone's write pointer to the beginning of the zone. After execution of this command, all data that was written to the zone is lost and can no longer be accessed.

  • OPEN ZONE: A zoned block device requires internal resources (e.g. persistent zone resources) to maintain each open zone, and insufficient resources may result in degraded functionality (e.g. reduced performance or increased power consumption). The OPEN ZONE command allows an application to explicitly open a zone, indicating to the drive that the resources necessary for writing to the zone should be kept available until the zone is fully written or closed with the CLOSE ZONE command. The performance benefit achieved with this command depends on the drive's implementation of zone management.

  • CLOSE ZONE allows an application to explicitly close a zone that was opened using the OPEN ZONE command, indicating to the drive that the resources used for writing to the zone are no longer needed and can be released.

  • FINISH ZONE allows an application to move the write pointer of a zone to the end of the zone to prevent any further write operations to the zone until it is reset.
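
As a concrete illustration, the sketch below exercises two of these commands through the Linux zoned block device interface, assuming a kernel with zoned device support (<linux/blkzoned.h>): the BLKREPORTZONE ioctl issues REPORT ZONES, and BLKRESETZONE issues RESET ZONE WRITE POINTER for the first sequential zone found. The device path is hypothetical, and the program needs appropriate permissions to open it.

```c
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/blkzoned.h>

/* Sketch: report the first few zones of a zoned block device and reset the
 * write pointer of the first sequential zone. Assumes Linux zoned block
 * device support and a zoned device such as /dev/sdb (hypothetical). */
#define NR_REPORT_ZONES 16

int main(int argc, char **argv)
{
    const char *dev = (argc > 1) ? argv[1] : "/dev/sdb";  /* hypothetical device */
    struct blk_zone_report *report;
    struct blk_zone_range range;
    unsigned int i;
    int fd;

    fd = open(dev, O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    report = calloc(1, sizeof(*report) + NR_REPORT_ZONES * sizeof(struct blk_zone));
    report->sector = 0;                   /* start reporting from the first zone */
    report->nr_zones = NR_REPORT_ZONES;

    /* REPORT ZONES: the kernel fills in the zone descriptors. */
    if (ioctl(fd, BLKREPORTZONE, report) < 0) { perror("BLKREPORTZONE"); return 1; }

    for (i = 0; i < report->nr_zones; i++) {
        struct blk_zone *z = &report->zones[i];
        printf("zone %2u: start %llu len %llu wp %llu type %u cond %u\n",
               i, (unsigned long long)z->start, (unsigned long long)z->len,
               (unsigned long long)z->wp, z->type, z->cond);
    }

    /* RESET ZONE WRITE POINTER on the first non-conventional zone reported. */
    for (i = 0; i < report->nr_zones; i++) {
        if (report->zones[i].type != BLK_ZONE_TYPE_CONVENTIONAL) {
            range.sector = report->zones[i].start;
            range.nr_sectors = report->zones[i].len;
            if (ioctl(fd, BLKRESETZONE, &range) < 0)
                perror("BLKRESETZONE");
            break;
        }
    }

    free(report);
    close(fd);
    return 0;
}
```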