
AI Glossary

The complete dictionary of artificial intelligence

162 categories · 2,032 subcategories · 23,060 terms
📖
Terms

Federated Learning

Distributed learning approach in which ML models train locally on edge devices without sharing raw data; only model updates are aggregated centrally.
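
The central aggregation step can be sketched as federated averaging (FedAvg): each client sends its locally trained weights and local sample count, and the server computes a size-weighted mean. This is a minimal illustrative sketch in plain Python; the function and argument names are assumptions, not a library API.

```python
def fed_avg(client_weights, client_sizes):
    """Aggregate local model weight vectors into a global model,
    weighting each client by its local sample count."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients with 100 and 300 local samples: the larger client
# pulls the global weights toward its own.
global_w = fed_avg([[1.0, 2.0], [3.0, 4.0]], [100, 300])  # → [2.5, 3.5]
```

Note that only the weight vectors cross the network; the raw training data never leaves the device.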

Model Quantization

Technique for reducing the numerical precision of ML model weights and activations (typically from 32-bit floats to 8-bit integers) to shrink model size and cut inference time on edge devices.
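
A minimal sketch of the underlying idea, affine int8 quantization, in pure Python; real toolchains such as TensorFlow Lite do this per-tensor or per-channel with calibration, so treat the helper names and the simple min/max scheme here as illustrative assumptions.

```python
def quantize_int8(weights):
    """Map a list of floats to int8 via a scale and zero point.
    Returns (quantized_values, scale, zero_point)."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0          # avoid zero scale for constant tensors
    zero_point = round(-128 - lo / scale)      # int8 value that represents 0.0's offset
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the int8 representation."""
    return [(v - zero_point) * scale for v in q]
```

Each stored value drops from 4 bytes to 1, at the cost of a small, bounded rounding error recovered on dequantization.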

TinyML

Specialized field of machine learning focused on deploying ultra-lightweight models on microcontrollers under extreme memory (a few KB) and power-consumption constraints.

Edge Inference

Process of executing ML predictions directly on edge devices, eliminating the dependence on cloud servers and the network round trip, which enables consistently low-latency responses.

On-Device Training

Ability to train or retrain ML models directly on edge devices, enabling continuous adaptation based on local data without transfer to the cloud.
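
In the simplest case, on-device adaptation is just gradient steps run locally. This sketch performs one SGD step for a linear model y = w·x + b on a batch that never leaves the device; the function name, learning rate, and squared-error loss are illustrative assumptions.

```python
def local_sgd_step(w, b, batch, lr=0.01):
    """One on-device SGD step for y = w*x + b on a batch of (x, y) pairs,
    minimizing mean squared error. Raw data stays local."""
    n = len(batch)
    grad_w = sum(2 * (w * x + b - y) * x for x, y in batch) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in batch) / n
    return w - lr * grad_w, b - lr * grad_b
```

Only the updated parameters (or their deltas) would ever be shared upstream, e.g. via the federated-averaging scheme above.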

Edge Device Management

Set of processes and tools for remote deployment, monitoring, maintenance, and updating of ML models on thousands of distributed edge devices.

Continuous Edge Learning

Paradigm where edge models continuously improve from new local data, with incremental updates periodically synchronized with the cloud.

Bandwidth-Aware Training

Training strategy that optimizes model update size and synchronization frequency based on available network bandwidth constraints.
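
One simple form such a strategy can take is a policy that picks an update format from the measured link speed. The thresholds and strategy names below are purely illustrative assumptions, not a standard protocol.

```python
def choose_sync_strategy(bandwidth_kbps, update_size_kb):
    """Pick how to ship a model update given the current link bandwidth."""
    seconds_to_send = update_size_kb * 8 / bandwidth_kbps  # KB -> kilobits
    if seconds_to_send < 1:
        return "full_update"    # link is fast: send the complete weight delta
    if seconds_to_send < 30:
        return "sparse_top_k"   # send only the largest-magnitude weight changes
    return "defer"              # too slow: wait for a better connection
```

In practice the "sparse" branch corresponds to techniques like top-k gradient sparsification or quantized updates.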

Latency-Aware Deployment

Deployment approach that selects and optimizes model architectures based on latency requirements specific to each critical edge application.

Resource-Constrained ML

Branch of ML specialized in developing algorithms and models optimized to run efficiently under strict CPU, memory, and energy constraints.

Edge Model Versioning

Version tracking system for ML models deployed on edge devices, enabling rapid rollbacks and complete deployment traceability.
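
The bookkeeping behind rapid rollbacks can be as simple as a per-device version history. This in-memory sketch stands in for a real registry database; the class and method names are assumptions for illustration.

```python
class ModelRegistry:
    """Track which model versions each edge device has received."""

    def __init__(self):
        self.deployed = {}  # device_id -> list of versions, newest last

    def record(self, device_id, version):
        """Log a successful deployment of `version` to `device_id`."""
        self.deployed.setdefault(device_id, []).append(version)

    def rollback(self, device_id):
        """Discard the newest version; return the version to restore, if any."""
        history = self.deployed.get(device_id, [])
        if len(history) >= 2:
            history.pop()
            return history[-1]
        return None
```

The retained history is also what provides the deployment traceability the definition mentions.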

Edge-to-Cloud Orchestration

Coordination architecture that optimizes the distribution of ML tasks between edge and cloud based on real-time constraints, available resources, and privacy requirements.
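
At its core, orchestration makes a placement decision per task. A minimal sketch, assuming a hypothetical `Task` record with a latency budget, a privacy flag, and a device-fit flag; the default cloud round-trip time is an illustrative number.

```python
from dataclasses import dataclass

@dataclass
class Task:
    latency_budget_ms: float
    contains_private_data: bool
    model_fits_on_device: bool

def place(task, cloud_rtt_ms=80.0):
    """Decide whether to run a task at the edge or offload it to the cloud."""
    if task.contains_private_data:
        return "edge"   # privacy requirement overrides everything else
    if task.latency_budget_ms < cloud_rtt_ms and task.model_fits_on_device:
        return "edge"   # cloud round trip alone would blow the budget
    return "cloud"
```

Real orchestrators re-evaluate this continuously as load, connectivity, and resource availability change.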

On-Device Model Compression

Techniques applied directly on the edge device to dynamically reduce model size based on operational conditions and resource usage.

Edge Model Monitoring

Continuous monitoring of performance and drift of ML models in production on edge devices, with alerts and triggers for automatic retraining.
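
A minimal drift check compares a rolling window of recent prediction confidences against a known baseline and fires a retraining trigger when they diverge. Window size and threshold below are illustrative assumptions.

```python
from collections import deque

class DriftMonitor:
    """Flag drift when the rolling mean confidence strays from a baseline."""

    def __init__(self, baseline_mean, window=100, threshold=0.1):
        self.baseline = baseline_mean
        self.scores = deque(maxlen=window)   # keeps only the newest observations
        self.threshold = threshold

    def observe(self, confidence):
        """Record one prediction confidence; return True if drift is detected."""
        self.scores.append(confidence)
        mean = sum(self.scores) / len(self.scores)
        return abs(mean - self.baseline) > self.threshold
```

Production systems typically use stronger statistics (e.g. population-stability or KS tests), but the alert-and-retrain loop has this shape.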

Adaptive Edge Inference

Mechanism that dynamically adjusts the complexity of the inference model based on available resources and real-time accuracy requirements.
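
One common realization is keeping several model variants on the device and selecting among them at request time. The variant table, memory figures, and accuracy numbers below are hypothetical.

```python
MODELS = {
    # name:  (memory_mb, accuracy) — hypothetical on-device variants
    "tiny":  (2,  0.80),
    "small": (8,  0.88),
    "full":  (32, 0.94),
}

def select_model(free_mem_mb, min_accuracy):
    """Pick the most accurate variant that fits in memory and meets the
    accuracy floor; return None if no variant qualifies."""
    candidates = [
        name for name, (mem, acc) in MODELS.items()
        if mem <= free_mem_mb and acc >= min_accuracy
    ]
    return max(candidates, key=lambda n: MODELS[n][1]) if candidates else None
```

Other adaptive schemes adjust a single model instead, e.g. via early-exit branches or dynamic input resolution.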

Edge Model Synchronization

Process of coordinating model updates between edge devices and a central server, managing conflicts and ensuring consistency while minimizing network traffic.
