Revisiting Causal Consistency
Distributed data systems must meet strict latency and availability requirements, which has spurred extensive research on consistency models for geo-replication. While strong consistency incurs high latency and degrades performance, eventual consistency complicates the programming model. Causal consistency has therefore gained traction and is widely regarded as the sweet spot for geo-replicated data stores. However, a joint research initiative by EPFL's Operating Systems and Distributed Computing laboratories questions this conclusion and shows that causal consistency has inherent limitations that constrain scalability and performance.
The project will revisit the full body of research on data store consistency, covering both theory and systems; study the trade-off between latency and scalability; and then propose new causal consistency designs that outperform existing ones.
The research starts from the hypothesis that the overheads incurred by COPS-SNOW, the first causally consistent system to implement latency-optimal read-only transactions (ROTs), can jeopardize overall performance because they increase resource utilization and reduce throughput. Building on this hypothesis, the study will demonstrate that the resulting overhead on writes is inherent in the causal consistency model. The research will then develop a design that offers an optimal trade-off among these performance goals and implement it in a new system.
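To make the write-side overhead concrete, the following is a minimal, single-node sketch (not the actual COPS-SNOW protocol; all names and the timestamp scheme are illustrative assumptions). The idea it demonstrates: a multi-versioned store lets a read-only transaction complete in a single round against a fixed snapshot, but the price is paid on writes, which must install and retain extra versions.

```python
from collections import defaultdict
from typing import Any

class Store:
    """Toy multi-versioned store: writes keep history so that
    read-only transactions can read a consistent snapshot in one round."""

    def __init__(self) -> None:
        self.clock = 0                      # logical time, advanced on writes
        self.versions = defaultdict(list)   # key -> [(write_ts, value), ...]

    def write(self, key: str, value: Any) -> int:
        # Write-side overhead: append a new version and retain old ones,
        # so concurrent snapshot readers still see a consistent past.
        self.clock += 1
        self.versions[key].append((self.clock, value))
        return self.clock

    def read_only_txn(self, keys: list) -> dict:
        # One-round ROT: pick a snapshot time, then return, for each key,
        # the latest version no newer than that snapshot.
        snapshot = self.clock
        result = {}
        for k in keys:
            visible = [(ts, v) for ts, v in self.versions[k] if ts <= snapshot]
            result[k] = visible[-1][1] if visible else None
        return result

s = Store()
s.write("x", 1)
s.write("y", 2)
s.write("x", 3)   # the old version of x is retained for snapshot readers
print(s.read_only_txn(["x", "y"]))   # {'x': 3, 'y': 2}
```

The sketch shows only the asymmetry at issue: the reader does no coordination, while every write performs extra bookkeeping; in a real geo-replicated system this bookkeeping also includes dependency metadata, which is where the hypothesized resource-utilization cost comes from.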
Among its long-term goals, the project will investigate Transactional Causal Consistency, which extends causal consistency with the abstraction of generic read-write transactions.
The project is being undertaken by Diego Didona, a postdoctoral researcher at LABOS, and builds on the work of both laboratories on distributed data platforms and protocols.