Memory-Resident MapReduce and SSD
25 Aug 2015
HPC systems follow a typical compute-centric paradigm. Compute nodes in HPC clusters are not equipped with any local persistent storage; instead, they are connected to a parallel file system at the storage backend for data I/O.
For data-intensive computing, the intermediate data are often too large to store on the compute nodes. A major effort to run data-intensive computing on HPC systems is to integrate high-performance Solid State Drives into the compute nodes.
A performance comparison between RamDisk and SSDs as backend storage shows that when storing intermediate data becomes the major bottleneck of job execution, the throughput of data shuffling becomes SSD-bound.
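Such a comparison can be approximated with a simple sequential-write benchmark run against each storage target. Below is a minimal sketch (not the paper's methodology): it writes a fixed amount of data to a directory, forces it to the device with `fsync`, and reports throughput. The mount points in the comment are hypothetical examples; `write_throughput_mb_s` is a name introduced here for illustration.

```python
import os
import time
import tempfile

def write_throughput_mb_s(dir_path, total_mb=64, chunk_mb=4):
    """Sequentially write total_mb of data into dir_path and
    return the observed throughput in MB/s."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    path = os.path.join(dir_path, "shuffle_intermediate.bin")
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force data out of the page cache onto the device
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed

# Hypothetical mount points; adjust to the cluster's actual layout,
# e.g. a RamDisk at /dev/shm versus an SSD at /mnt/ssd.
with tempfile.TemporaryDirectory() as d:
    print(f"{write_throughput_mb_s(d, total_mb=16):.1f} MB/s")
```

Running the same function against a RamDisk mount and an SSD mount makes the gap visible: once shuffle output outpaces the SSD's sustained write rate, the job is SSD-bound regardless of available CPU.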
The Inefficiency in Utilizing SSDs
When multiple data-intensive tasks are running and issuing a large number of write requests, such obliviousness can result in substantial interference among tasks… Early tasks can take advantage of the write buffer and clean blocks on the SSDs and quickly complete their work. As the buffer gradually fills up and clean SSD blocks are depleted, internal operations for delayed writes and garbage collection are activated. These operations then interfere with the execution of ShuffleMapTasks.
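The timing asymmetry described above can be illustrated with a toy latency model (illustrative numbers only, not measurements from the paper): writes are fast while clean blocks remain, then slow down once garbage collection kicks in, so tasks that arrive late pay a much higher per-write cost.

```python
# Toy model of SSD write-buffer exhaustion; all constants are made up
# for illustration and do not come from any real device.
BUFFER_BLOCKS = 100       # number of writes absorbed at buffer speed
FAST_MS, SLOW_MS = 1, 10  # per-write latency before/after the buffer fills

def write_latency(write_index):
    """Latency of the i-th write to the device: fast while clean
    blocks remain, slow once garbage collection is active."""
    return FAST_MS if write_index < BUFFER_BLOCKS else SLOW_MS

# An early task issues writes 0..49; a late task issues writes 150..199.
early = sum(write_latency(i) for i in range(50))
late = sum(write_latency(i) for i in range(150, 200))
print(early, late)  # the late task's writes are 10x slower in this model
```

Even this crude model captures why early ShuffleMapTasks finish quickly while later ones stall: the same logical write costs far more once the device's internal cleanup competes with foreground I/O.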