AI Glossary
A complete glossary of artificial intelligence
Federated Learning
Distributed learning approach in which ML models are trained locally on edge devices without sharing raw data; only model updates are aggregated centrally.
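A minimal sketch of the central aggregation step, using federated averaging (FedAvg) over plain Python lists; the client weight vectors and sample counts below are made-up illustration values:

```python
def fedavg(client_weights, client_sizes):
    """Weighted average of client model weights (FedAvg).

    client_weights: one weight vector per client (trained locally)
    client_sizes: number of local samples per client, used to weight
                  each client's contribution to the global model
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Three clients trained locally; only their weights reach the server.
global_w = fedavg(
    [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
    [10, 10, 20],
)
```

Each client uploads only its weight vector; the server never sees the underlying training data.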
Model Quantization
Technique that reduces the numerical precision of a model's weights and activations (typically from 32-bit floats to 8-bit integers) to shrink model size and speed up inference on edge devices.
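A toy illustration of affine 8-bit quantization in pure Python. Real toolchains apply this per tensor or per channel with calibrated ranges; this sketch quantizes a single list of floats, and the function names are illustrative:

```python
def quantize_int8(values):
    """Affine quantization: map floats to int8 via a scale and zero-point."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0  # avoid zero scale for constant input
    zero_point = round(-128 - lo / scale)  # maps `lo` to -128
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the int8 representation."""
    return [(v - zero_point) * scale for v in q]

q, scale, zp = quantize_int8([0.0, 0.5, 1.0])
restored = dequantize(q, scale, zp)
```

The round trip loses at most about one quantization step (`scale`) per value, which is the size/accuracy trade-off the definition refers to.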
TinyML
Specialized field of machine learning focused on deploying ultra-lightweight models on microcontrollers under extreme memory (a few kilobytes) and power-consumption constraints.
Edge Inference
Process of executing ML predictions directly on edge devices, eliminating dependence on cloud servers and delivering low, network-independent response times.
On-Device Training
Ability to train or retrain ML models directly on edge devices, enabling continuous adaptation based on local data without transfer to the cloud.
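A minimal sketch of one local training step for a linear model, assuming the device holds a handful of (x, y) samples that never leave it; the data values and function names are hypothetical:

```python
def sgd_step(w, b, xs, ys, lr=0.1):
    """One gradient-descent step for y = w*x + b on locally held data.
    Raw (x, y) pairs stay on the device; only the updated parameters
    would ever be shared."""
    n = len(xs)
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    return w - lr * grad_w, b - lr * grad_b

def mse(w, b, xs, ys):
    """Mean squared error of the current parameters on the local data."""
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Hypothetical local sensor readings, roughly following y = 2x.
xs, ys = [1.0, 2.0, 3.0], [2.1, 3.9, 6.2]
w, b = 0.0, 0.0
before = mse(w, b, xs, ys)
for _ in range(50):
    w, b = sgd_step(w, b, xs, ys, lr=0.02)
after = mse(w, b, xs, ys)
```

Repeating such steps as new local data arrives is the continuous-adaptation loop the definition describes.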
Edge Device Management
Set of processes and tools for remote deployment, monitoring, maintenance, and updating of ML models on thousands of distributed edge devices.
Continuous Edge Learning
Paradigm where edge models continuously improve from new local data, with incremental updates periodically synchronized with the cloud.
Bandwidth-Aware Training
Training strategy that optimizes model update size and synchronization frequency based on available network bandwidth constraints.
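One common ingredient of such strategies is update sparsification: under a tight bandwidth budget, a device sends only the k largest-magnitude entries of a model update as (index, value) pairs. A pure-Python sketch with illustrative function names:

```python
def topk_sparsify(update, k):
    """Keep only the k largest-magnitude entries of a dense update,
    encoded as sorted (index, value) pairs for cheap transmission."""
    idx = sorted(range(len(update)), key=lambda i: abs(update[i]),
                 reverse=True)[:k]
    return sorted((i, update[i]) for i in idx)

def densify(pairs, dim):
    """Rebuild the dense update on the receiving side; dropped
    entries are treated as zero."""
    out = [0.0] * dim
    for i, v in pairs:
        out[i] = v
    return out
```

Lowering k (or the synchronization frequency) trades model freshness for bandwidth, which is exactly the knob this definition refers to.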
Latency-Aware Deployment
Deployment approach that selects and optimizes model architectures based on latency requirements specific to each critical edge application.
Resource-Constrained ML
Branch of ML specialized in developing algorithms and models optimized to run efficiently under strict CPU, memory, and energy constraints.
Edge Model Versioning
Version tracking system for ML models deployed on edge devices, enabling rapid rollbacks and complete deployment traceability.
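A minimal in-memory sketch of such a registry, supporting deployment history and rapid rollback; the class and version names are made up for illustration:

```python
class EdgeModelRegistry:
    """Tracks which model version is live on a device and allows
    rolling back to any previously deployed version."""

    def __init__(self):
        self.history = []   # (version, artifact) in deployment order
        self.current = None

    def deploy(self, version, artifact):
        self.history.append((version, artifact))
        self.current = version

    def rollback(self, version):
        if not any(v == version for v, _ in self.history):
            raise KeyError(f"unknown version: {version}")
        self.current = version

    def artifact(self):
        """Return the artifact for the currently active version."""
        for v, a in reversed(self.history):
            if v == self.current:
                return a
```

Keeping the full history (rather than only the latest artifact) is what makes rollbacks instant and deployments traceable.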
Edge-to-Cloud Orchestration
Coordination architecture that optimizes the distribution of ML tasks between edge and cloud based on real-time constraints, available resources, and privacy requirements.
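A toy placement policy showing the kind of decision such an orchestrator makes; the rule set, field names, and thresholds here are purely illustrative:

```python
def route(task, edge_free_mem_mb, link_up):
    """Decide where an ML task runs: privacy-sensitive tasks must stay
    on the edge; otherwise run on the edge when memory allows, else
    offload to the cloud if the network link is available."""
    if task["private"]:
        return "edge"
    if task["mem_mb"] <= edge_free_mem_mb:
        return "edge"
    return "cloud" if link_up else "reject"
```

A production orchestrator would weigh latency budgets and current load as well, but the structure (privacy first, then resources, then connectivity) is the same.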
On-Device Model Compression
Techniques applied directly on the edge device to dynamically reduce model size based on operational conditions and resource usage.
Edge Model Monitoring
Continuous monitoring of performance and drift of ML models in production on edge devices, with alerts and triggers for automatic retraining.
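A minimal drift check that could run on-device: compare a recent window of a feature against its training-time reference distribution and trigger retraining when the mean shifts too far. The threshold and function names are assumptions for illustration:

```python
import statistics

def drift_score(reference, recent):
    """Shift of the feature mean between the reference window and a
    recent window, measured in reference standard deviations."""
    mu = statistics.mean(reference)
    sigma = statistics.stdev(reference)
    return abs(statistics.mean(recent) - mu) / sigma

def needs_retraining(reference, recent, threshold=3.0):
    """Retraining trigger: fire when drift exceeds the chosen
    (here arbitrary) threshold."""
    return drift_score(reference, recent) > threshold
```

Real monitors track many signals (accuracy proxies, input statistics, resource usage), but a thresholded drift score is the basic alert-and-trigger pattern the definition describes.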
Adaptive Edge Inference
Mechanism that dynamically adjusts the complexity of the inference model based on available resources and real-time accuracy requirements.
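One common realization is a two-stage cascade: a cheap model answers when it is confident, and the expensive model runs only otherwise. A sketch with hypothetical model callables that return a (label, confidence) pair:

```python
def adaptive_predict(x, fast_model, full_model, confidence_threshold=0.9):
    """Run the cheap model first; fall back to the full model only
    when the cheap model's confidence is below the threshold."""
    label, confidence = fast_model(x)
    if confidence >= confidence_threshold:
        return label, "fast"
    label, _ = full_model(x)
    return label, "full"
```

Lowering the threshold when the device is under load shifts more traffic to the fast path, which is the dynamic resource/accuracy adjustment the definition refers to.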
Edge Model Synchronization
Process of coordinating model updates between edge devices and central server, managing conflicts and ensuring consistency while minimizing network traffic.
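A toy reconciliation sketch using per-layer version numbers and a last-writer-wins conflict rule; real systems use richer schemes (e.g. version vectors), and all names here are illustrative. Only layers that actually differ are queued for transfer, which is the traffic-minimization aspect:

```python
def sync(server, device):
    """Merge per-layer model state {layer: (version, weights)} from a
    server and a device: on conflict the higher version wins, and each
    side receives only the layers it is missing or has outdated."""
    merged = dict(server)
    for layer, (ver, weights) in device.items():
        if layer not in merged or ver > merged[layer][0]:
            merged[layer] = (ver, weights)
    to_device = {l: v for l, v in merged.items() if device.get(l) != v}
    to_server = {l: v for l, v in merged.items() if server.get(l) != v}
    return merged, to_device, to_server
```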