liblightnvm is a user space library that manages provisioning and I/O submission for physical flash. The motivation is to enable I/O-intensive applications to implement their own Flash Translation Layers (FTLs) directly within their logic. Our system design is based on the principle that high-performance I/O applications use log-structured data structures to accommodate flash constraints and implement their own data placement, garbage collection, and I/O scheduling strategies. Using liblightnvm, applications have direct access to a physical flash provisioning interface, and perceive storage as a large, single-address-space flash pool from which they can allocate flash blocks from individual NAND flash chips (LUNs) in the SSD. This makes it possible to match the parallelism available in the host with that available in the device. Each application submits I/Os using physical addressing that corresponds to a specific block, page, etc., on the device.
liblightnvm relies on Open-Channel SSDs, devices that share responsibilities with the host to enable fully host-based FTLs, and on LightNVM, the Linux kernel subsystem that supports them. To learn more about Open-Channel SSDs and the software ecosystem around them, please see .
For example, most key-value stores use Log-Structured Merge Trees (LSMs) as their base data structure (e.g., RocksDB, MongoDB, Apache Cassandra). The LSM is in itself a form of FTL, since it manages data placement and garbage collection. This class of applications can benefit from a direct path to physical flash to take full advantage of the optimizations they already implement and spend host resources on, instead of losing them across the several levels of indirection that the traditional I/O stack imposes in the name of generality: page cache, VFS, file system, and the device's logical-to-physical translation table. liblightnvm exposes append-only primitives over direct physical flash to support this class of applications.