Machine learning has become ubiquitous today, with applications ranging from the diagnosis of skin cancer and cardiac arrhythmia to recommendations on streaming and gaming platforms. But in a distributed machine learning setting, what happens if one ‘worker’ or ‘peer’ is compromised? How can the aggregation step remain resilient in the presence of such an adversary?

Although a few solutions exist to make machine learning robust and efficient in the face of adversarial behavior, their success is limited. To tackle this problem, EPFL’s Rachid Guerraoui, Full Professor at the School of Computer and Communication Sciences, has proposed a new research project that accounts for all kinds of adversarial behavior and aims to build practical, robust distributed learning solutions.

The research stems from Prof. Guerraoui’s past work on adversarial (Byzantine) behavior. He has authored several papers on distributed machine learning and developed schemes that remain resilient to malfunctions in both worker-server and peer-to-peer implementations. Two solutions introduced by Prof. Guerraoui and colleagues are Krum, an aggregation rule that guarantees convergence despite Byzantine workers, and Bulyan, which strengthens such aggregation rules against attacks that exploit the high dimensionality of modern models.
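To make the idea concrete, below is a minimal sketch of the Krum selection rule described in Blanchard et al. (NeurIPS 2017), written in Python with NumPy. The function name, parameter names, and toy data are illustrative assumptions rather than code from the project: Krum scores each submitted gradient by its summed squared distance to its closest peers and keeps the single least outlying one, which bounds the influence of up to f Byzantine workers.

import numpy as np

def krum(gradients, f):
    """Sketch of the Krum selection rule (Blanchard et al., 2017).

    gradients: list of 1-D numpy arrays, one per worker (n of them).
    f: assumed upper bound on the number of Byzantine workers.
    Krum requires n >= 2f + 3 so that every honest gradient has
    enough honest neighbours to be scored against.
    """
    n = len(gradients)
    assert n >= 2 * f + 3, "Krum needs n >= 2f + 3 workers"

    # Pairwise squared Euclidean distances between worker gradients.
    dists = np.array([[np.sum((g_i - g_j) ** 2) for g_j in gradients]
                      for g_i in gradients])

    scores = []
    for i in range(n):
        # Distances from worker i to all other workers, sorted ascending.
        others = np.sort(np.delete(dists[i], i))
        # Score: sum of distances to the n - f - 2 closest other gradients.
        scores.append(np.sum(others[: n - f - 2]))

    # The gradient with the lowest score is the least "outlying" one.
    return gradients[int(np.argmin(scores))]

# Example: 7 workers, at most 2 Byzantine; the malicious updates are ignored.
rng = np.random.default_rng(0)
honest = [rng.normal(1.0, 0.1, size=10) for _ in range(5)]
byzantine = [rng.normal(-50.0, 0.1, size=10) for _ in range(2)]
print(krum(honest + byzantine, f=2))

Bulyan, in turn, can be understood as iterating such a rule to select a set of gradients and then taking a coordinate-wise trimmed average around the median, which blunts attacks hidden in a few coordinates of a high-dimensional model.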

Apart from distributed machine learning, Prof. Guerraoui has worked extensively on secure distributed storage, transactional shared memory, and distributed programming languages. He has also co-authored a book on transactional systems (Hermes) and another on reliable distributed programming (Springer).