Workload Skew in The Log-Structured File System

The paper *SFS: Random Write Considered Harmful in Solid State Drives* was presented at FAST 2012 by Changwoo Min et al.

Poor random write performance is one of the major problems limiting wider deployment of SSDs. A log-structured file system reduces the number of random writes by converting them into sequential log writes. However, the segment cleaning (also known as garbage collection) overhead in a log-structured file system can severely degrade performance and shorten the lifespan of an SSD. This paper proposes exploiting the skew in workloads to further reduce the write cost.
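To make the log-structured idea concrete, here is a minimal sketch (not SFS itself; all names are illustrative) of a store that turns random logical-block writes into sequential appends, tracked by a mapping table. The stale copies left behind in the log are exactly what segment cleaning must later reclaim.

```python
class LogStructuredStore:
    """Toy log-structured store: every write becomes an append."""

    def __init__(self):
        self.log = []        # append-only log of (block_id, data)
        self.mapping = {}    # logical block id -> position in log

    def write(self, block_id, data):
        # A write to any logical address, random or not, becomes a
        # sequential append; the old copy of the block becomes garbage.
        self.mapping[block_id] = len(self.log)
        self.log.append((block_id, data))

    def read(self, block_id):
        return self.log[self.mapping[block_id]][1]

    def garbage_positions(self):
        # Log entries no longer referenced by the mapping table are
        # stale and must eventually be reclaimed by segment cleaning.
        live = set(self.mapping.values())
        return [i for i in range(len(self.log)) if i not in live]

store = LogStructuredStore()
store.write(7, "a")   # random logical addresses...
store.write(3, "b")
store.write(7, "c")   # ...all become sequential appends
print(store.read(7))            # latest copy of block 7
print(store.garbage_positions())  # stale first write of block 7
```

The cost hidden in this sketch is that `garbage_positions()` grows with every overwrite, and reclaiming those slots requires copying the surviving live blocks elsewhere, which is the write amplification the paper targets.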

It has long been observed that I/O workloads have non-uniform access frequency distributions. The block update frequency of I/O workloads is highly skewed: roughly 90% of the writes go to about 1% of the blocks.
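A synthetic workload can illustrate this kind of skew. The sketch below (stdlib only; the exponent and sizes are illustrative assumptions, and the 90%/1% figure is the paper's measurement, not something this toy reproduces exactly) draws writes from a Zipf-like popularity distribution and measures what share of writes the hottest 1% of blocks receive.

```python
import random
from collections import Counter

random.seed(0)
num_blocks = 10_000
# Zipf-like popularity: weight of block i proportional to 1/(i+1)^s.
s = 1.2
weights = [1.0 / (i + 1) ** s for i in range(num_blocks)]
writes = random.choices(range(num_blocks), weights=weights, k=100_000)

counts = Counter(writes)
hot = [b for b, _ in counts.most_common(num_blocks // 100)]  # top 1% of blocks
share = sum(counts[b] for b in hot) / len(writes)
print(f"top 1% of blocks receive {share:.0%} of writes")
```

Even this mild synthetic skew concentrates a large majority of writes on a tiny fraction of blocks; real traces in the paper are more extreme.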


Like the FAST '15 paper on reducing garbage collection interference in SSDs through workload isolation, this paper proposes segment isolation for the log-structured file system.


This paper isolates frequently updated (hot) data into separate segments to reduce the write amplification introduced by log-structured writing: segments filled with hot blocks turn almost entirely into garbage and can be cleaned with few live-block copies.
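A hedged sketch of the idea: blocks whose update count exceeds a threshold are routed to a "hot" open segment, all others to a "cold" one, so that hot data clusters together. The threshold and grouping policy here are illustrative inventions, not SFS's actual cost-hotness metric.

```python
from collections import defaultdict

SEGMENT_SIZE = 4     # blocks per segment (illustrative)
HOT_THRESHOLD = 3    # updates needed to classify a block as hot (illustrative)

class SegmentWriter:
    """Routes writes into hot or cold open segments by update count."""

    def __init__(self):
        self.update_count = defaultdict(int)
        self.open = {"hot": [], "cold": []}   # currently-filling segments
        self.sealed = []                      # list of (temperature, blocks)

    def write(self, block_id):
        self.update_count[block_id] += 1
        temp = "hot" if self.update_count[block_id] >= HOT_THRESHOLD else "cold"
        seg = self.open[temp]
        seg.append(block_id)
        if len(seg) == SEGMENT_SIZE:          # segment full: seal it
            self.sealed.append((temp, seg[:]))
            seg.clear()

w = SegmentWriter()
for b in [1, 1, 1, 1, 2, 3, 4, 5]:   # block 1 is updated often, the rest once
    w.write(b)
print(w.open["hot"])   # repeated writes of block 1 gather in the hot segment
print(w.sealed)        # cold segment sealed with mostly write-once blocks
```

Because the hot segment holds only frequently overwritten blocks, most of its contents are already stale by cleaning time, while cold segments stay mostly live and rarely need cleaning at all; that separation is what lowers the overall cleaning cost.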