AI Glossary
The Complete Dictionary of Artificial Intelligence
Convergence Rate
Measure of how quickly a federated learning algorithm approaches an optimal solution or stationary point, influenced by data heterogeneity and by the frequency of communication between clients and the server.
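For illustration, a common shape of such a bound for local-SGD-style federated methods on smooth non-convex objectives is sketched below; the exact exponents and constants depend on the assumptions, and the notation (T rounds, K local steps, N participating clients, gradient-noise variance, heterogeneity bound) is introduced here only for this example.
```latex
% Illustrative shape only; exact terms vary across analyses and assumptions.
% T: communication rounds, K: local steps per round, N: participating clients,
% \sigma^2: stochastic-gradient variance, \zeta^2: bound on client heterogeneity.
\min_{t \le T} \; \mathbb{E}\,\lVert \nabla F(x_t) \rVert^2
  \;=\; \mathcal{O}\!\left( \frac{1}{\sqrt{NKT}}
      \;+\; \frac{\sigma^2 + K\,\zeta^2}{KT} \right)
```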
Local Updates
Number of optimization iterations performed locally on each client before communication with the central server, directly impacting convergence and computational efficiency.
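A minimal sketch of the idea, assuming a linear least-squares model and mini-batch SGD in NumPy (function and parameter names are illustrative, not from this glossary):
```python
import numpy as np

def local_update(weights, X, y, num_local_steps, lr=0.1, batch_size=32, seed=0):
    """Run a fixed number of local SGD steps on one client's data before
    communicating with the server (illustrative sketch)."""
    w = weights.copy()
    rng = np.random.default_rng(seed)
    for _ in range(num_local_steps):
        idx = rng.choice(len(X), size=min(batch_size, len(X)), replace=False)
        Xb, yb = X[idx], y[idx]
        grad = Xb.T @ (Xb @ w - yb) / len(idx)  # gradient of 0.5 * ||Xw - y||^2
        w -= lr * grad
    return w  # only the updated weights (or their delta) are transmitted
```
Increasing the number of local steps reduces how often clients must communicate, but on non-IID data it can also let local models drift apart before aggregation.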
Global Model
Aggregated model resulting from the federation of contributions from all participating clients, representing the collective knowledge of the distributed system.
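A minimal aggregation sketch in the style of FedAvg, weighting each client's model by its local data size (names are illustrative):
```python
import numpy as np

def aggregate_global_model(client_weights, client_num_samples):
    """Build the global model as a data-size-weighted average of the
    clients' weight vectors (FedAvg-style sketch; other weightings exist)."""
    total = sum(client_num_samples)
    return sum((n / total) * w for n, w in zip(client_num_samples, client_weights))

# Example: three clients holding different amounts of local data.
clients = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 1.0])]
global_model = aggregate_global_model(clients, client_num_samples=[100, 50, 50])
```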
Gradient Compression
Technique reducing the size of gradients transmitted between clients and server through quantization or sampling, improving communication efficiency while preserving convergence.
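One simple compression scheme is top-k sparsification, sketched below; quantizing values to fewer bits is another common option (function names are illustrative):
```python
import numpy as np

def top_k_sparsify(grad, k):
    """Keep only the k largest-magnitude entries; the client transmits
    just these indices and values instead of the dense gradient."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def densify(idx, values, dim):
    """Server-side reconstruction of the (approximate) gradient."""
    dense = np.zeros(dim)
    dense[idx] = values
    return dense
```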
Differential Privacy
Theoretical framework ensuring confidentiality by adding controlled noise to local updates, impacting the trade-off between privacy and federated model convergence.
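A minimal sketch of the basic mechanism (clip each client's update, then add Gaussian noise); calibrating the noise to a formal (epsilon, delta) budget requires a privacy accountant, which is omitted here:
```python
import numpy as np

def privatize_update(update, clip_norm, noise_multiplier, rng=None):
    """Clip a client update to a maximum L2 norm and add Gaussian noise
    scaled to that norm (illustrative sketch, no epsilon/delta accounting)."""
    rng = rng or np.random.default_rng()
    scale = min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return update * scale + noise
```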
Byzantine Fault Tolerance
System robustness against malicious or faulty clients that send incorrect updates, requiring resilient detection mechanisms and robust aggregation rules.
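As a sketch, one simple robust aggregation rule is the coordinate-wise median, which tolerates a minority of arbitrarily corrupted updates where a plain mean would not (Krum and trimmed mean are well-known alternatives):
```python
import numpy as np

def median_aggregate(client_updates):
    """Coordinate-wise median of the client updates: a basic
    Byzantine-robust alternative to simple averaging."""
    return np.median(np.stack(client_updates, axis=0), axis=0)
```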
Asynchronous Federated Learning
Training paradigm where clients update the global model asynchronously, reducing waiting times but complicating convergence analysis.
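A minimal sketch of a staleness-aware server update, in the spirit of asynchronous federated averaging; the decay rule and the base mixing weight are assumptions made for illustration:
```python
import numpy as np

def async_server_update(global_weights, client_weights, staleness, base_mix=0.6):
    """Mix one client's (possibly stale) model into the global model with a
    weight that shrinks as the client's contribution gets more out of date."""
    alpha = base_mix / (1.0 + staleness)
    return (1.0 - alpha) * global_weights + alpha * client_weights
```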
Model Heterogeneity
Variation in model architectures across clients in a federated system, requiring adapted aggregation strategies to ensure convergence.
Convergence Analysis
Theoretical study of conditions guaranteeing the convergence of federated algorithms, taking into account data heterogeneity, failures, and communication constraints.
Optimization Landscape
Collective loss surface in federated learning, characterized by multiple local optima due to the non-IID data distribution among clients.
Client Sampling
Strategy for selecting a subset of clients at each training round, influencing convergence speed and representation fairness in the global model.
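A minimal sketch of uniform client sampling for one round (importance- or availability-based sampling are common variants; names are illustrative):
```python
import numpy as np

def sample_clients(num_clients, fraction, rng=None):
    """Pick a random subset of clients to participate in this round."""
    rng = rng or np.random.default_rng()
    num_selected = max(1, int(round(fraction * num_clients)))
    return rng.choice(num_clients, size=num_selected, replace=False)

# Example: select 10% of 1000 clients for the current round.
selected = sample_clients(1000, fraction=0.1)
```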
Momentum in Federated Learning
Technique that uses the history of local or global gradients to stabilize and accelerate optimization in distributed environments.
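A minimal sketch of server-side momentum (in the style of FedAvgM): the averaged client update is passed through a momentum buffer before being applied to the global model; hyperparameter values are illustrative:
```python
import numpy as np

def server_momentum_step(global_weights, avg_client_delta, velocity,
                         beta=0.9, server_lr=1.0):
    """Accumulate the averaged client update in a velocity buffer and apply
    it to the global model (server-momentum sketch)."""
    velocity = beta * velocity + avg_client_delta
    return global_weights + server_lr * velocity, velocity
```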
Convergence Guarantees
Theoretical properties ensuring that the federated algorithm will converge under certain conditions, including bounds on convergence rate and final model quality.
Federated Optimization
Discipline studying optimization methods tailored to the constraints of federated learning, combining optimization theory with distributed systems.
System Heterogeneity
Variability in computational and network capabilities among clients, directly impacting convergence strategies and requiring adaptive approaches.