Collaborative machine learning (ML) from decentralized data has garnered significant attention in both academia and industry in recent years. Its defining feature is that multiple agents/nodes collaboratively train a shared ML model, either through the coordination of a central parameter server or in a fully decentralized manner using peer-to-peer information exchange. However, running distributed ML over wireless networks brings new challenges, including limited communication resources (such as frequency, time, and space) and channel uncertainty. These factors can significantly degrade learning performance and increase training latency.

The first part of this talk provides an overview of important research directions in server-based federated learning (FL) over wireless networks, including resource allocation design, the effects of asynchronous training, privacy and security issues, and energy efficiency. The second part focuses on serverless, consensus-based decentralized learning, accounting for the actual per-iteration communication delay when broadcast transmission is used for information exchange among locally connected nodes. We introduce a novel communication framework called BASS (BroadcAst-based Subgraph Sampling), and demonstrate how faster convergence can be achieved by randomly sampling sparser subgraphs of the base topology with proper communication coordination.
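To illustrate the underlying idea of consensus over randomly sampled subgraphs, here is a minimal sketch (not the talk's actual BASS algorithm). It assumes a fixed base topology, an edge-activation probability of 0.5 per round, and Metropolis weights to build a doubly stochastic mixing matrix on each sampled subgraph; all of these choices are hypothetical stand-ins for the coordinated sampling scheme described in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical base topology: a ring of n nodes plus two chords.
n = 8
edges = [(i, (i + 1) % n) for i in range(n)] + [(0, 4), (2, 6)]

def metropolis_matrix(active_edges, n):
    """Doubly stochastic mixing matrix (Metropolis weights) for a subgraph."""
    deg = np.zeros(n)
    for i, j in active_edges:
        deg[i] += 1
        deg[j] += 1
    W = np.zeros((n, n))
    for i, j in active_edges:
        w = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, j] = W[j, i] = w
    # Self-weights make each row (and, by symmetry, column) sum to 1.
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    return W

x = rng.normal(size=n)   # each node holds a local value
target = x.mean()        # the consensus value all nodes should reach

for _ in range(1000):
    # Sample a sparser random subgraph of the base topology each round.
    active = [e for e in edges if rng.random() < 0.5]
    x = metropolis_matrix(active, n) @ x

print(float(np.abs(x - target).max()))  # residual disagreement after mixing
```

Because every mixing matrix is doubly stochastic, the network-wide average is preserved exactly at every iteration, while repeated mixing over random subgraphs drives all nodes toward it; the talk's contribution concerns choosing the sampling and broadcast coordination so that this happens with low per-iteration communication delay.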

August 30 @ 15:10 — 15:40 (30′)

Prof. Zheng Chen (Linköping University – SE)