AI Glossary
The Complete Artificial Intelligence Dictionary
Asynchronous Federated Learning
Distributed learning paradigm where clients update the global model independently without requiring strict synchronization between participants.
Stochastic Convergence
Mathematical property ensuring that an algorithm converges to an optimal solution despite asynchronous updates and uncertainty in the order in which they arrive.
Lock-Free Update
Technique allowing clients to modify the global model without using mutual exclusion mechanisms, thereby reducing bottlenecks.
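A minimal sketch of the optimistic, version-checked pattern behind lock-free updates: a client reads a snapshot, computes a merged model, and commits only if no one else committed in between, retrying otherwise. All names are illustrative, and Python has no true atomic compare-and-swap, so a small internal lock stands in for the hardware CAS primitive a real system would use.

```python
import threading

class LockFreeModel:
    """Toy global model updated via optimistic compare-and-swap.
    Illustrative only: the internal lock merely emulates an atomic
    CAS instruction; no client holds a lock while computing."""

    def __init__(self, params):
        self._state = (0, list(params))   # (version, params) snapshot
        self._cas_guard = threading.Lock()

    def read(self):
        # Atomic snapshot read; never blocks on writers' computation.
        return self._state

    def compare_and_swap(self, expected_version, new_params):
        # Commit succeeds only if no other client committed meanwhile.
        with self._cas_guard:
            version, _ = self._state
            if version != expected_version:
                return False
            self._state = (version + 1, new_params)
            return True

    def apply_update(self, delta):
        # Retry loop: re-read, recompute the merge, attempt to commit.
        while True:
            version, params = self.read()
            merged = [p + d for p, d in zip(params, delta)]
            if self.compare_and_swap(version, merged):
                return merged
```

The retry loop trades occasional recomputation for never serializing clients behind a long-held lock, which is the bottleneck reduction the definition refers to.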
Temporal Shift Tolerance
System's ability to maintain performance despite significant variations in response times of participating clients.
Dynamic Weighted Aggregation
Method of aggregating updates where weights are adjusted based on the freshness and quality of client contributions.
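One possible weighting scheme for such an aggregation, sketched under the assumption that each update carries a staleness count and a quality score (both names are illustrative): weight decays with staleness and scales with quality, then a normalized weighted average is taken.

```python
def weighted_aggregate(global_params, updates):
    """updates: list of (params, staleness, quality) tuples.
    Illustrative scheme: fresher, higher-quality contributions
    receive proportionally larger weight in the average."""
    weights = [quality / (1 + staleness)
               for _, staleness, quality in updates]
    total = sum(weights)
    return [
        sum(w * params[i] for (params, _, _), w in zip(updates, weights)) / total
        for i in range(len(global_params))
    ]
```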
Update Buffer
Temporary data structure storing client updates pending integration into the global model.
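A thread-safe queue is one natural realization of such a buffer: clients push updates concurrently and the aggregator drains them in batches. This is an illustrative sketch; the tuple layout and function names are assumptions, not a standard API.

```python
import queue

# Holds (client_id, params, base_version) tuples until the aggregator
# integrates them; queue.Queue is thread-safe, so clients may push
# concurrently without coordination.
update_buffer = queue.Queue(maxsize=128)

def submit_update(client_id, params, base_version):
    update_buffer.put((client_id, params, base_version))

def drain(max_items=32):
    # Pull up to max_items pending updates without blocking.
    batch = []
    while len(batch) < max_items:
        try:
            batch.append(update_buffer.get_nowait())
        except queue.Empty:
            break
    return batch
```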
Asynchronous Selection Strategy
Algorithm determining which clients are eligible to participate in each training round without centralized coordination.
Variable Communication Latency
Inherent variation in transmission times between clients and the central server in a non-synchronous distributed environment.
Eventual Consistency
Consistency model guaranteeing that all replicas of the model will eventually converge to the same state if no new updates are made.
Asynchronous Federated Averaging Algorithm
Extension of FedAvg adapted for asynchronous environments where averages are calculated based on available updates rather than synchronized ones.
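A single merge step in the spirit of FedAsync (Xie et al., 2019) can be sketched as a moving average whose mixing rate shrinks with the update's staleness, so late arrivals move the global model less. The decay rule and `alpha0` default are illustrative choices.

```python
def fedasync_merge(global_params, client_params, staleness, alpha0=0.6):
    """One asynchronous merge: new_global = (1 - a) * global + a * client,
    where the mixing rate a decays with staleness (illustrative decay)."""
    alpha = alpha0 / (1 + staleness)
    return [(1 - alpha) * g + alpha * c
            for g, c in zip(global_params, client_params)]
```

Unlike synchronous FedAvg, nothing waits for a full cohort: each arriving update is merged immediately with whatever global state exists at that moment.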
Model Degradation
Phenomenon of temporary deterioration in model performance when integrating outdated asynchronous updates.
Parallel Synchronization
Hybrid approach where groups of clients work in parallel asynchronously but synchronize their results at predefined checkpoints.
Shared Global State
Representation of the central model simultaneously accessible by multiple clients without strict locking mechanisms.
Distributed Non-Convex Optimization
Optimization problem where the objective function has multiple local minima and is solved in a distributed manner without synchronization.
Asynchronous Convergence Theorem
Theoretical framework establishing the conditions under which asynchronous federated learning algorithms converge to stationary points.
Temporal Heterogeneity
Variation in clients' computing capabilities and availability at different times, affecting the frequency of their contributions.
Flow Control Mechanism
System regulating the number of simultaneous updates to prevent central server overload while maximizing throughput.
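A bounded semaphore is the simplest realization of this kind of flow control: it caps how many updates the server processes at once, blocking excess clients until a slot frees up. The limit and function names below are illustrative.

```python
import threading

MAX_IN_FLIGHT = 8  # illustrative cap on concurrent update processing
flow_gate = threading.BoundedSemaphore(MAX_IN_FLIGHT)

def handle_update(apply_fn, update):
    # Blocks when MAX_IN_FLIGHT updates are already being processed,
    # protecting the server while still allowing high throughput.
    with flow_gate:
        return apply_fn(update)
```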
Parameter Staleness
Measure of how outdated a client's update is relative to the current state of the global model at the time the update is applied, typically counted in global commits missed.
Asynchronous Gradient Algorithm
Variant of gradient descent where gradient calculations and parameter updates are performed without temporal coordination.
Delay Compensation Strategy
Technique for adjusting delayed updates to mitigate the negative impact of their staleness on model convergence.
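Staleness measurement and compensation fit together: staleness counts the global commits a client missed while computing, and one published compensation approach, DC-ASGD (Zheng et al., 2017), adds a first-order correction proportional to the parameter drift accumulated in the meantime. The sketch below follows that spirit; the `lam` coefficient and elementwise form are illustrative.

```python
def staleness(current_version, base_version):
    """Staleness = number of global commits since the client
    downloaded the model copy it trained on."""
    return current_version - base_version

def compensate(gradient, w_current, w_base, lam=0.1):
    """Delay-compensated gradient in the spirit of DC-ASGD:
    g + lam * g * g * (w_current - w_base), applied elementwise,
    approximates the gradient at the current parameters."""
    return [g + lam * g * g * (wc - wb)
            for g, wc, wb in zip(gradient, w_current, w_base)]
```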