tcmu-runner is a daemon that handles the execution of SCSI commands on behalf of the kernel SCSI target subsystem, allowing SCSI Logical Units (LUNs) to be exported with regular files or block devices as backing storage.
LinuxIO (LIO™) is the standard open-source SCSI target implementation in the Linux® kernel. LIO supports all prevalent storage fabrics, including Fibre Channel, FCoE, IEEE 1394, iSCSI, NVMe-oF, iSER, and SRP.
The Target Core Module Userspace (TCMU) implements a fabric that creates a link
between the kernel SCSI target infrastructure and a user space application.
The kernel-level module involved is target_core_user, which implements the kernel side of this link. tcmu-runner implements the user space side, handling the details of the TCMU interface (UIO, netlink, pthreads, and D-Bus). tcmu-runner exports a simpler C plugin API allowing the creation of file handlers that emulate various device types. This organization is shown in the figure below.
The ZBC file handler implements a SCSI ZBC host aware or host managed disk emulation using TCMU C plugin API. This handler uses a regular file as the backend storage for the emulated device.
With this infrastructure in place, any command issued by an application or by a kernel component (e.g. a file system) will be sent to the tcmu-runner daemon through the TCMU kernel driver. The file handler can process the command in user space using regular POSIX system calls, and a reply is sent back on completion of the command processing. From the point of view of the application or kernel component using the emulated disk, all accesses appear to execute on actual hardware.
The control of tcmu-runner emulated devices is achieved using the targetcli utility, available as a package in most distributions. For instance, on Fedora® Linux, tcmu-runner and targetcli can be installed using the following commands.
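A minimal install sketch for Fedora follows; the package names `tcmu-runner` and `targetcli` are assumed from common distribution naming and may differ on other distributions.

```shell
# Install the tcmu-runner daemon and the targetcli configuration utility
# (package names assumed; verify with "dnf search tcmu" if needed):
sudo dnf install -y tcmu-runner targetcli
```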
tcmu-runner relies on the loopback virtual SAS adapter kernel module to expose the emulated device as a regular disk to the kernel SCSI stack. Enabling this kernel module first requires that support for the Generic Target Core Module (TCM) and ConfigFS Infrastructure be enabled from the top-level Device Drivers menu.
With this infrastructure enabled, the configuration options CONFIG_TCM_USER2 and CONFIG_LOOPBACK_TARGET can be enabled.
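The resulting kernel configuration fragment might look like the following; CONFIG_TARGET_CORE and CONFIG_CONFIGFS_FS are the options behind the Generic Target Core Module and ConfigFS support mentioned above.

```
CONFIG_CONFIGFS_FS=y
CONFIG_TARGET_CORE=m
CONFIG_TCM_USER2=m
CONFIG_LOOPBACK_TARGET=m
```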
The tcmu-runner ZBC file handler is compiled and installed by default. This handler allows the creation of emulated ZBC disks with a regular file used as backing storage.
The ZBC file handler supports the emulation of both host aware and host managed SCSI disks. Furthermore, the characteristics of the emulated device can all be configured. The following table shows the configuration parameters available.
| Parameter | Description | Default |
|-----------|-------------|---------|
| model-type | Device model type: HA for host aware or HM for host managed | HM |
| lba-size (B) | LBA size in bytes (512 or 4096) | 512 |
| zsize-size (MiB) | Zone size in MiB | 256 MiB |
| conv-num | Number of conventional zones at LBA 0 (can be 0) | Number of zones corresponding to 1% of the device capacity |
| open-num | Optimal (for host aware) or maximum (for host managed) number of open zones | 128 |
These parameters are grouped together into a configuration string with the format [opt1[/opt2][...]@]path_to_backing_file. For instance, a host managed disk with a 128 MiB zone size, 100 conventional zones, and the file /var/local/zbc0.raw as backing storage can be specified with such a configuration string.
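The string assembly can be sketched with a small hypothetical shell helper. The option spellings used in the example (model-HM, zsize-128, conv-100) are assumptions derived from the parameter table above, not verified against the handler source; only the overall opt1/opt2@path layout comes from the format description.

```shell
# Hypothetical helper: join handler options with "/" and append
# "@" followed by the backing file path, per the documented format
# [opt1[/opt2][...]@]path_to_backing_file.
make_zbc_cfgstring() {
    local path="$1"
    shift
    local opts
    # Join the remaining arguments with "/" (IFS only changes in the subshell)
    opts=$(IFS=/; printf '%s' "$*")
    if [ -n "$opts" ]; then
        printf '%s@%s\n' "$opts" "$path"
    else
        printf '%s\n' "$path"
    fi
}

# Example: host managed model, 128 MiB zones, 100 conventional zones
make_zbc_cfgstring /var/local/zbc0.raw model-HM zsize-128 conv-100
# -> model-HM/zsize-128/conv-100@/var/local/zbc0.raw
```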
The following example shows how to create a small 20 GB host managed ZBC disk with 10 conventional zones and a 256 MiB zone size, with the file /var/local/zbc0.raw used as backing storage. The emulated disk is attached locally using the loopback interface. This requires that tcmu-runner be running on the system.
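On a systemd-based distribution, the daemon can be started as follows (the unit name `tcmu-runner` is assumed from the package name).

```shell
# Start the tcmu-runner daemon and check that it is running:
sudo systemctl start tcmu-runner
systemctl status tcmu-runner
```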
With tcmu-runner running, the targetcli command is used to create the emulated disk.
The backstore create command specifies the emulated disk capacity with the size=20G parameter. The backing file /var/local/zbc0.raw will be created if necessary and resized to match the requested capacity.
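A sketch of the backstore creation command follows. The user:zbc backstore path reflects how targetcli exposes tcmu-runner handlers; the cfgstring= option names (model-HM, zsize-256, conv-10) are assumptions based on the parameter table and may differ from the handler's exact syntax.

```shell
# Create a 20 GB host managed ZBC backstore named zbc0, backed by
# /var/local/zbc0.raw, with 256 MiB zones and 10 conventional zones:
sudo targetcli /backstores/user:zbc create name=zbc0 size=20G \
    cfgstring=model-HM/zsize-256/conv-10@/var/local/zbc0.raw
```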
When the backstore is linked to lun0 of the loopback link, the emulated device becomes visible to the kernel and is initialized and managed in the same manner as a physical device. This can be seen in the kernel messages log.
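The loopback attachment can be sketched as follows; the WWN shown is illustrative (targetcli can also generate one automatically when none is given).

```shell
# Create a loopback target with an illustrative WWN, then attach the
# zbc0 backstore as its first LUN (lun0):
sudo targetcli /loopback create naa.50014055e5f25aa0
sudo targetcli /loopback/naa.50014055e5f25aa0/luns create /backstores/user:zbc/zbc0
```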
The disk can now be listed with tools such as lsblk and lsscsi.
All ZBD-compliant tools and applications can access and control the emulated disk in exactly the same manner as a physical device. For instance, the libzbc graphical interface (gzbc) can be used to display the zones of the emulated disk.
The following script can be used to create an emulated disk with a single command.
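A creation script along these lines is sketched below. The device name, capacity, WWN, and cfgstring option spellings are all assumptions carried over from the earlier example, not fixed values required by tcmu-runner.

```shell
#!/bin/sh
# zbc_create.sh - create a tcmu-runner emulated ZBC disk in one step.
# Illustrative sketch: names, sizes and cfgstring options are assumptions.
set -e

NAME=zbc0
SIZE=20G
CFG="model-HM/zsize-256/conv-10@/var/local/${NAME}.raw"
WWN=naa.50014055e5f25aa0

# Create the backstore, the loopback target, and the LUN linking them:
targetcli /backstores/user:zbc create name="${NAME}" size="${SIZE}" cfgstring="${CFG}"
targetcli /loopback create "${WWN}"
targetcli "/loopback/${WWN}/luns" create "/backstores/user:zbc/${NAME}"
```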
Tearing down an emulated disk can also be automated with a single command line as shown below.
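Teardown can be sketched as the creation steps in reverse; the names below match the creation example above and are assumptions, and removing the backing file is optional.

```shell
# Delete the loopback target and the backstore, then optionally
# remove the backing file (names from the creation example):
sudo targetcli /loopback delete naa.50014055e5f25aa0
sudo targetcli /backstores/user:zbc delete zbc0
sudo rm -f /var/local/zbc0.raw
```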