As the number of cores grows, efficient concurrent data structures become increasingly important: by reducing sequential bottlenecks, they yield scalable performance.
Non-blocking data structures have received particular attention in practice due to synchronization properties that are valuable in multi-threaded environments, especially their platform-independent progress guarantees, i.e., the ability of threads to make progress without depending on the OS or VM schedulers. A number of concurrency libraries have been developed for languages such as C++ and Java. However, efficient non-blocking memory reclamation is still a challenge, especially in languages with manual memory management such as C++. In such settings, not only must memory be safely reclaimed, but memory usage must also remain bounded; otherwise, memory can be exhausted, which violates progress properties.
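To make the safe-and-bounded reclamation requirement concrete, here is a minimal hazard-pointer sketch in C++ (all names are ours, not any library's API, and this is a single-slot-per-thread simplification): a reader publishes the pointer it is about to dereference, and a retired node is freed only once no published hazard pointer protects it. Because the number of hazard slots is fixed, periodic scans keep the retired list, and hence memory usage, bounded.

```cpp
#include <atomic>
#include <vector>

// Illustrative hazard-pointer sketch; names are hypothetical.
constexpr int kMaxThreads = 8;
std::atomic<void*> g_hazard[kMaxThreads];  // one published slot per thread

// Reader side: publish a pointer before dereferencing it.
void* protect(int tid, std::atomic<void*>& src) {
    void* p;
    do {
        p = src.load();
        g_hazard[tid].store(p);    // publish the hazard pointer
    } while (p != src.load());     // re-validate: src may have changed
    return p;
}

void clear(int tid) { g_hazard[tid].store(nullptr); }

// Writer side: retired nodes wait here until no hazard pointer protects them.
std::vector<void*> g_retired;

bool is_protected(void* p) {
    for (int i = 0; i < kMaxThreads; ++i)
        if (g_hazard[i].load() == p) return true;
    return false;
}

// Frees every unprotected retired node; returns how many were freed.
int scan() {
    int freed = 0;
    auto it = g_retired.begin();
    while (it != g_retired.end()) {
        if (!is_protected(*it)) {
            ::operator delete(*it);
            it = g_retired.erase(it);
            ++freed;
        } else {
            ++it;
        }
    }
    return freed;
}
```

Note that the reader's re-validation loop is only lock-free, not wait-free, which is one reason wait-free reclamation is harder than this sketch suggests.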
Wait-free data structures are critical in latency-sensitive applications, as they offer the strongest progress property: every thread makes progress in a bounded number of steps. Although they have historically been slower and more tedious to implement than their lock-free counterparts, they are now gaining traction thanks to techniques such as Kogan and Petrank's fast-path-slow-path methodology.
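As a rough illustration of the fast-path-slow-path idea (heavily simplified: a shared counter stands in for a full data-structure operation, and all names are hypothetical, not Kogan and Petrank's actual construction): a thread first makes a bounded number of lock-free CAS attempts, and only on failure announces its operation so that any thread, itself included, can help complete it.

```cpp
#include <atomic>

// Simplified fast-path-slow-path sketch; illustrative only.
constexpr int kMaxThreads = 4;
constexpr int kFastTries = 3;

std::atomic<long> counter{0};
std::atomic<bool> announce[kMaxThreads];  // pending slow-path requests

// Any thread can complete operations announced by stalled threads.
void help_others() {
    for (int i = 0; i < kMaxThreads; ++i) {
        bool pending = announce[i].load();
        if (pending && announce[i].compare_exchange_strong(pending, false))
            counter.fetch_add(1);  // complete the announced increment
    }
}

long increment(int tid) {
    // Fast path: a bounded number of direct CAS attempts.
    for (int i = 0; i < kFastTries; ++i) {
        long c = counter.load();
        if (counter.compare_exchange_strong(c, c + 1)) return c + 1;
    }
    // Slow path: announce the operation, then help until it completes.
    announce[tid].store(true);
    help_others();
    return counter.load();
}
```

The bounded fast path plus guaranteed helping is what turns a lock-free retry loop into a wait-free operation; the real methodology applies this pattern to arbitrary normalized operations, not just counters.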
Until recently, wait-free memory reclamation remained a challenge because existing methodologies cannot be directly applied to the memory reclamation problem. One way to address the challenge is to use non-blocking techniques such as Hazard Pointers or Hazard Eras but modify the data structures, e.g., convert them to Timnat and Petrank's "normalized" form, which guarantees that any operation that fails to make progress can be restarted. Normalization has several limitations; for example, it requires that data structures use only CAS (compare-and-swap) operations. However, there is growing interest in specialized instructions such as FAA (fetch-and-add), for which no general normalization methodology exists.
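The CAS-only restriction can be seen in the common slot-claiming pattern (the sketch below is illustrative and not from the talk): a CAS retry loop may repeat arbitrarily under contention, whereas a single FAA completes in one step regardless of contention. This one-step behavior is exactly why FAA-based designs are attractive for wait-free data structures, and also why they fall outside the normalized form.

```cpp
#include <atomic>

std::atomic<long> g_tail{0};

// CAS style, as assumed by normalization: may retry arbitrarily
// under contention (lock-free, but not wait-free on its own).
long claim_slot_cas() {
    long t = g_tail.load();
    while (!g_tail.compare_exchange_weak(t, t + 1)) {
        // t was reloaded by compare_exchange_weak; retry
    }
    return t;
}

// FAA style, as used by FAA-based queue designs: every call claims a
// distinct slot in a single hardware step, with no retry loop.
long claim_slot_faa() {
    return g_tail.fetch_add(1);
}
```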
What if there were a more general way to organize reclamation in wait-free data structures, regardless of how they are expressed? A recent algorithm, WFE (Wait-Free Eras), achieves exactly that by extending (lock-free) Hazard Eras.
In this talk, we discuss the memory reclamation problem, existing techniques, and how the recent wait-free WFE technique is constructed. We discuss the key challenges WFE solves so that it can be applied universally, regardless of data structure organization.
Program committee comment
Ruslan has experience in both industry and academia, so he knows what practitioners really need. In this talk, he covers a fundamental question of concurrent programming: how to organize memory reclamation in the most efficient way possible.