
AI Glossary

The complete Artificial Intelligence dictionary

162 categories · 2,032 subcategories · 23,060 terms

Weight Sharing

Method where multiple neural connections share the same parameters to significantly reduce the number of unique weights.
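A minimal sketch of the idea, assuming a codebook-style scheme in which every weight is snapped to the nearest of k shared values (Deep Compression uses k-means for the codebook; the quantile-based codebook here is a simplification):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 64))  # a dense layer's weights

# Build a small codebook of k shared values and map every weight
# to its nearest codebook entry; all 4096 weights then reference
# only k unique parameters.
k = 16
codebook = np.quantile(W, np.linspace(0, 1, k))
indices = np.abs(W[..., None] - codebook).argmin(axis=-1)
W_shared = codebook[indices]

unique_weights = np.unique(W_shared).size  # at most k distinct values
```

Only the k codebook values and the (cheaply encodable) index map need to be stored.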


Low-Rank Factorization

Decomposition of weight matrices into products of lower-rank matrices to compress dense network layers.


Tensor Decomposition

Advanced technique factorizing convolutional weight tensors into simpler tensors to reduce computational complexity.
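One simple instance of the idea, sketched in NumPy: unfold a 4-D convolutional kernel along its output-channel mode and truncate the SVD, splitting the convolution into r shared base filters plus a cheap 1×1 recombination (full CP or Tucker decompositions generalize this):

```python
import numpy as np

rng = np.random.default_rng(2)
# Convolutional weight tensor: (out_channels, in_channels, kH, kW).
K = rng.standard_normal((64, 32, 3, 3))

# Unfold along the output-channel mode and truncate its SVD.
unfolded = K.reshape(64, -1)              # 64 x (32*3*3)
U, s, Vt = np.linalg.svd(unfolded, full_matrices=False)
r = 16
core = Vt[:r].reshape(r, 32, 3, 3)        # r base filters shared by all outputs
mixing = U[:, :r] * s[:r]                 # 64 x r, recombines them per channel

params_before = K.size
params_after = core.size + mixing.size
```

The convolution then runs with r filters instead of 64, followed by an inexpensive channel-mixing step.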


Sparse Coding

Representation of activations with many zero coefficients, enabling efficient compression and accelerated computation.
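A minimal sketch of the sparsification step, assuming simple top-k hard thresholding of an activation vector (dictionary-learning formulations of sparse coding solve for the coefficients instead):

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.standard_normal(1000)  # dense activation vector

# Keep only the 50 largest-magnitude coefficients; zero out the rest.
k = 50
threshold = np.sort(np.abs(a))[-k]
sparse = np.where(np.abs(a) >= threshold, a, 0.0)

sparsity = np.mean(sparse == 0.0)  # fraction of zero coefficients
```

The resulting vector can be stored in a compressed sparse format, and downstream multiplications skip the zero entries.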


Huffman Coding

Lossless compression algorithm assigning variable-length binary codes to weights based on their frequency of occurrence.
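A self-contained sketch of Huffman coding applied to already-quantized weights, using Python's `heapq`. The skewed symbol distribution is synthetic, chosen so the variable-length code beats a fixed 2-bit code:

```python
import heapq
from collections import Counter

import numpy as np

rng = np.random.default_rng(4)
# Quantized weights drawn from a skewed distribution (many zeros).
weights = rng.choice([0, 1, 2, 3], size=1000, p=[0.7, 0.15, 0.1, 0.05]).tolist()

def huffman_codes(symbols):
    """Build a prefix code: frequent symbols get shorter bit strings."""
    freq = Counter(symbols)
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)  # tiebreaker so dicts are never compared
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (n1 + n2, tick, merged))
        tick += 1
    return heap[0][2]

codes = huffman_codes(weights)
huffman_bits = sum(len(codes[s]) for s in weights)
fixed_bits = 2 * len(weights)  # 2 bits/symbol with a fixed-length code
```

Because the code is lossless, decompression recovers the quantized weights exactly; this is the final stage of pipelines such as Deep Compression.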


Model Splitting

Division of a model into segments distributed between clients and server to minimize communication while preserving confidentiality.
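A toy sketch of split inference, assuming a two-segment linear network: the client runs the first layer on its private input and transmits only the intermediate ("smashed") activation, never the raw data:

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.standard_normal((1, 16))         # raw private data stays on the client
W_client = rng.standard_normal((16, 8))  # client-side segment
W_server = rng.standard_normal((8, 4))   # server-side segment

# Only the 8-dimensional intermediate activation crosses the network,
# not the 16-dimensional raw input.
smashed = np.maximum(x @ W_client, 0.0)  # client computes and sends this
output = smashed @ W_server              # server finishes the forward pass
```

The split point controls the trade-off: deeper client segments leak less about the input but cost the client more compute.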


Parameter Binarization

Conversion of weights into binary values (+1/-1) to drastically reduce memory and accelerate calculations on limited devices.
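A minimal sketch of the binarization step, assuming an XNOR-Net-style variant in which the sign matrix is scaled by the layer's mean absolute weight to reduce approximation error:

```python
import numpy as np

rng = np.random.default_rng(5)
W = rng.standard_normal((4, 8))

# Binarize: keep only the sign of each weight, scaled by a single
# per-layer constant alpha (plain BinaryConnect omits the scaling).
alpha = np.abs(W).mean()
W_bin = alpha * np.sign(W)

distinct = set(np.unique(np.sign(W)).tolist())  # only -1 and +1 remain
```

Each weight now needs a single bit plus one shared float per layer, and multiplications reduce to sign flips and additions.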


Federated Averaging

Aggregation algorithm weighting local model updates according to client dataset sizes for global convergence.
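A toy FedAvg aggregation round, sketched in NumPy with two hypothetical clients whose locally trained parameters are averaged with weights proportional to their dataset sizes:

```python
import numpy as np

# Parameters produced by each client's local training round.
client_weights = [np.array([1.0, 1.0]),   # client holding 10 samples
                  np.array([3.0, 5.0])]   # client holding 30 samples
client_sizes = np.array([10, 30])

# Weight each update by the client's share of the total data.
coeffs = client_sizes / client_sizes.sum()  # 0.25 and 0.75
global_model = sum(c * w for c, w in zip(coeffs, client_weights))
```

The larger client dominates: the global model lands at [2.5, 4.0], three-quarters of the way toward its parameters. Only model updates travel to the server; raw data never leaves the clients.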


Model Pruning Ratio

Percentage of weights or neurons removed from the original model, determining the level of compression applied.
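A short sketch of how a target pruning ratio is applied in magnitude pruning, assuming unstructured (per-weight) pruning at a 90% ratio:

```python
import numpy as np

rng = np.random.default_rng(6)
W = rng.standard_normal(1000)

# Magnitude pruning at a 90% ratio: zero the smallest 90% of weights.
ratio = 0.9
cutoff = np.quantile(np.abs(W), ratio)
W_pruned = np.where(np.abs(W) >= cutoff, W, 0.0)

achieved_ratio = np.mean(W_pruned == 0.0)  # fraction of weights removed
```

Structured variants remove whole neurons, filters, or heads instead of individual weights, trading some accuracy for hardware-friendly sparsity.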


Quantization-Aware Training

Training that incorporates the effects of quantization to minimize performance degradation after compression.
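A minimal sketch of the core mechanism, a "fake quantization" forward pass: weights are rounded to the quantized grid during training so the loss already reflects quantization error (in a real training loop a straight-through estimator passes gradients through the rounding unchanged):

```python
import numpy as np

def fake_quantize(w, num_bits=8):
    """Simulate symmetric uniform quantization in the forward pass."""
    qmax = 2 ** (num_bits - 1) - 1          # 127 for 8 bits
    scale = np.abs(w).max() / qmax          # map the weight range onto the grid
    return np.round(w / scale) * scale      # snap each weight to the grid

rng = np.random.default_rng(7)
w = rng.standard_normal(16)
w_q = fake_quantize(w, num_bits=8)

max_error = np.abs(w - w_q).max()  # bounded by half a quantization step
```

Because the network trains against these rounded weights, it adapts to the quantization grid, so the post-training drop in accuracy is far smaller than with post-hoc quantization.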
