
AI Glossary

The complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Scaling Laws

Mathematical principles describing how deep learning model performance improves predictably with increases in model size, data, and computation.


Power Law Scaling

Mathematical relationship where model performance follows a power law in factors such as model size (number of parameters), compute, or amount of training data.
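As a minimal sketch, a loss-versus-parameters power law of the Kaplan form L(N) = (N_c / N)^α can be evaluated directly; the constants below are illustrative placeholders in that style, not fitted values:

```python
def power_law_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Predicted loss for a model with n_params parameters.

    Follows the Kaplan-style form L(N) = (N_c / N)**alpha.
    n_c and alpha are illustrative placeholders, not fitted values.
    """
    return (n_c / n_params) ** alpha

# Doubling model size shrinks the predicted loss by a constant factor, 2**-alpha:
ratio = power_law_loss(2e9) / power_law_loss(1e9)
```

The constant improvement ratio per doubling is the signature of a power law: on a log-log plot, loss versus parameters is a straight line.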


Chinchilla Scaling Laws

Specific scaling laws discovered by DeepMind suggesting that current models are undertrained and that data is more important than previously thought for optimizing performance.


Compute-Optimal Scaling

Strategy for optimally allocating computational resources between model size and training data quantity to maximize performance at a fixed budget.
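A hedged sketch of such an allocation, using the common approximation C ≈ 6·N·D (training FLOPs ≈ 6 × parameters × tokens) and the Chinchilla-style rule of thumb of roughly 20 tokens per parameter; both are heuristics, not exact results:

```python
import math

def compute_optimal_split(flops: float, tokens_per_param: float = 20.0):
    """Split a fixed FLOP budget between parameters N and training tokens D.

    Assumes C = 6 * N * D and D = tokens_per_param * N (a rule of thumb),
    which gives N = sqrt(C / (6 * tokens_per_param)) and D = 20 * N.
    """
    n_params = math.sqrt(flops / (6.0 * tokens_per_param))
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Roughly the Chinchilla training budget (~70B params x ~1.4T tokens):
n, d = compute_optimal_split(5.88e23)
```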


Data Scaling Laws

Principles describing how increasing the amount of training data influences model performance, often following a power law relationship with saturation.
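A saturating data-scaling curve is often written as L(D) = L_inf + (D_c / D)^α, where L_inf is the irreducible loss floor. A minimal sketch with illustrative (not fitted) constants:

```python
def data_scaling_loss(n_tokens: float, l_inf: float = 1.7,
                      d_c: float = 5.4e13, alpha: float = 0.095) -> float:
    """Loss as a function of training tokens, with an irreducible floor.

    L(D) = l_inf + (d_c / D)**alpha; the constants here are illustrative,
    chosen only to show power-law decay toward saturation at l_inf.
    """
    return l_inf + (d_c / n_tokens) ** alpha
```

More data always helps under this form, but with diminishing returns: the loss approaches l_inf asymptotically and never drops below it.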


Model Size Scaling

Study of how model capabilities evolve based on the number of parameters, revealing predictable improvements up to certain saturation points.


Token Scaling

Analysis of the impact of the number of training tokens on model performance, essential for determining the optimal amount of textual data.


Emergent Abilities

Capabilities that suddenly appear in large models at certain critical scales, which are not present in smaller models of the same family.


Phase Transitions

Abrupt changes in model behavior or performance that occur at specific size or data thresholds.


Neural Scaling Laws

General theoretical framework unifying empirical observations on neural network scaling across different architectures and tasks.


Kaplan Scaling Laws

First empirical scaling laws, established by OpenAI (Kaplan et al., 2020), showing power-law relationships between model size, dataset size, and performance.


IsoFLOP Curves

Performance curves at constant FLOP budget allowing comparison of different architectures or training strategies at equal computational cost.
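A sketch of how an isoFLOP curve is traced: fix a FLOP budget C, sweep the model size N, derive the token count D = C / (6·N) from the C ≈ 6·N·D approximation, and find the size with the lowest predicted loss. The parametric loss below uses Chinchilla-style fitted constants purely as an illustration:

```python
def isoflop_sweep(flops, sizes, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Predicted loss along an isoFLOP curve.

    For each model size N, tokens are set to D = C / (6 * N), then loss is
    evaluated with the parametric form L = E + A/N**alpha + B/D**beta
    (Chinchilla-style constants, used here only for illustration).
    """
    curve = []
    for n in sizes:
        d = flops / (6.0 * n)
        curve.append((n, E + A / n ** alpha + B / d ** beta))
    return curve

budget = 1e21
sizes = [10 ** e for e in range(7, 12)]  # 10M to 100B parameters
best_n, best_loss = min(isoflop_sweep(budget, sizes), key=lambda t: t[1])
```

The curve is U-shaped: too few parameters wastes the budget on tokens a small model cannot exploit, too many starves the model of data, and the minimum marks the compute-optimal size for that budget.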


Critical Batch Size

Optimal batch size beyond which further increase no longer produces significant improvements in training speed.
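One common estimator of where this threshold lies is the "simple gradient noise scale" B ≈ tr(Σ) / |g|² from McCandlish et al.; a hedged sketch from per-sample gradients (the function name and shapes are assumptions for illustration):

```python
import numpy as np

def gradient_noise_scale(per_sample_grads: np.ndarray) -> float:
    """Estimate the simple gradient noise scale B = tr(Sigma) / |g|^2.

    per_sample_grads has shape (batch, dim). A larger noise scale suggests
    a larger critical batch size; this is a rough proxy, not the full
    curvature-aware estimator.
    """
    mean_grad = per_sample_grads.mean(axis=0)
    trace_cov = per_sample_grads.var(axis=0, ddof=1).sum()
    return trace_cov / (mean_grad @ mean_grad)

rng = np.random.default_rng(0)
true_grad = np.ones(10)
noisy = true_grad + 1.0 * rng.standard_normal((256, 10))
cleaner = true_grad + 0.1 * rng.standard_normal((256, 10))
```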


Double Descent

Phenomenon where test error decreases, increases, and then decreases again as model capacity grows past the interpolation threshold, the point at which the model can fit the training data exactly.


Grokking

Phenomenon where models suddenly acquire generalizable understanding after a long period of apparent overfitting.


Sharpness-Aware Minimization

Optimization technique seeking flat minima in the loss landscape, particularly important for the stability of large models.


Loss Scaling

Prediction of the evolution of the loss function based on allocated resources, allowing performance estimation before training.


Performance Plateaus

Phases of stagnation in performance improvement despite increasing resources, indicating limits in current scaling laws.


Scaling Exponent

Crucial parameter in power laws determining the rate of performance improvement relative to resource increase.


Scaling Coefficient

Multiplicative constant in scaling equations determining the baseline performance level before applying scaling effects.
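In practice, both the exponent and the coefficient are estimated by fitting y = c · x^(−a) as a straight line in log-log space. A self-contained sketch of that fit on synthetic, noise-free measurements:

```python
import math

def fit_power_law(xs, ys):
    """Fit y = c * x**(-a) by least squares in log-log space.

    Returns (coefficient c, exponent a): log y = log c - a * log x,
    so the slope gives -a and the intercept gives log c.
    """
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
             / sum((a - mx) ** 2 for a in lx))
    return math.exp(my - slope * mx), -slope

# Recover known parameters from synthetic measurements (c=3.0, a=0.1):
xs = [1e6, 1e7, 1e8, 1e9]
ys = [3.0 * x ** -0.1 for x in xs]
c, a = fit_power_law(xs, ys)
```

With real training runs the measurements are noisy and the fit yields error bars on both parameters; the exponent governs the slope of improvement, the coefficient only shifts the curve vertically.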
