
Bilevel Optimization in Wireless Federated Learning

Abstract

Federated learning (FL) has become a cornerstone of privacy-preserving, distributed machine learning, allowing multiple clients to collaboratively train models without centralizing their data. While significant progress has been made in designing communication-efficient and robust FL algorithms, most existing methods focus on single-level optimization and assume synchronous or cooperative participation. In contrast, many real-world applications involve hierarchical bilevel formulations, complex constraints, conditional stochastic structures, and uncertainty quantification, all of which remain underexplored in federated settings. This dissertation addresses these gaps by developing new theoretical frameworks and algorithmic solutions for federated bilevel and stochastic optimization under both cooperative and anarchic environments.

The first part of the dissertation introduces Anarchic Federated Bilevel Optimization (AFBO), which extends bilevel optimization to federated systems with asynchronous, unreliable, and uncoordinated participation. This framework highlights the fundamental convergence and fairness challenges that arise when central coordination is weakened. To address scalability, a Single-Loop Anarchic Federated Bilevel Optimization (SL-AFBO) algorithm is proposed; it avoids nested optimization loops while preserving theoretical guarantees and significantly reducing computational and communication costs.

The second part explores constrained bilevel optimization in federated systems, focusing on cases where the lower-level problem is convex and subject to explicit constraints. This setting supports applications such as resource allocation, fairness enforcement, and policy adherence, and the proposed algorithms come with provable convergence guarantees.

The third part extends the scope beyond bilevel problems by proposing Anarchic Federated Conditional Stochastic Optimization (AFCSO), which models objectives defined over conditional and evolving distributions across heterogeneous clients. AFCSO captures non-stationary data environments and client-level uncertainty, providing robust optimization strategies for dynamic federated systems.

Finally, the dissertation introduces Federated Bayesian Bilevel Optimization (FBBO), which incorporates probabilistic modeling into bilevel optimization to quantify uncertainty explicitly. This Bayesian framework enables reliable decision-making under incomplete information and offers advantages in sensitive applications where uncertainty is critical.

Collectively, the contributions of this dissertation establish a comprehensive theoretical and algorithmic foundation for federated bilevel and stochastic optimization. Each framework is accompanied by rigorous convergence analysis and validated through extensive empirical studies on benchmark and real-world datasets. By addressing key challenges such as anarchic participation, constrained optimization, conditional stochasticity, and Bayesian uncertainty, this work significantly broadens the applicability and robustness of federated learning, paving the way for its adoption in increasingly complex and decentralized environments.
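For orientation, the class of problems studied here can be sketched with the standard federated bilevel formulation from the literature; the symbols below ($m$, $f_i$, $g_i$, $\mathcal{Y}$) are generic placeholders rather than notation taken from the dissertation, and the precise objectives, client weights, and constraint sets are specified in the respective chapters.

\begin{align}
\min_{x \in \mathbb{R}^{d}} \; \Phi(x) &:= \frac{1}{m} \sum_{i=1}^{m} f_i\bigl(x, y^{*}(x)\bigr) \\
\text{s.t.} \quad y^{*}(x) &:= \arg\min_{y \in \mathcal{Y}} \; \frac{1}{m} \sum_{i=1}^{m} g_i(x, y),
\end{align}

where $m$ is the number of clients, $f_i$ and $g_i$ denote client $i$'s upper- and lower-level objectives, and $\mathcal{Y}$ is a convex constraint set ($\mathcal{Y} = \mathbb{R}^{p}$ recovers the unconstrained case treated in the first part).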