
AI Glossary

A complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Scaling Laws

Mathematical principles describing how deep learning model performance improves predictably with increases in model size, data, and computation.


Power Law Scaling

Mathematical relationship where model performance follows a power law based on factors such as model size, number of parameters, or amount of data.
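As a rough sketch of what such a power law looks like in practice, the snippet below expresses loss as a function of parameter count. The constants `n_c` and `alpha` are illustrative (in the spirit of published empirical fits), not authoritative values:

```python
# Illustrative power-law scaling: loss as a function of parameter count N.
# L(N) = (N_c / N) ** alpha — the constants below are for demonstration;
# real values come from fitting empirical training runs.

def power_law_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Loss shrinks predictably, with diminishing returns, as N grows."""
    return (n_c / n_params) ** alpha

# Growing the model tenfold yields a predictable, diminishing loss reduction.
small = power_law_loss(1e9)   # ~1 B parameters
large = power_law_loss(1e10)  # ~10 B parameters
assert large < small
```

The same functional form reappears throughout this section: only the resource on the x-axis (parameters, tokens, compute) and the fitted exponent change.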


Chinchilla Scaling Laws

Specific scaling laws discovered by DeepMind suggesting that current models are undertrained and that data is more important than previously thought for optimizing performance.


Compute-Optimal Scaling

Strategy for optimally allocating computational resources between model size and training data quantity to maximize performance at a fixed budget.
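A minimal sketch of this allocation, using the common approximation that training cost is C ≈ 6·N·D FLOPs and the often-quoted Chinchilla-style rule of thumb of roughly 20 tokens per parameter (both constants are simplifications, not exact prescriptions):

```python
import math

# Rough compute-optimal split: for a FLOP budget C ~ 6 * N * D,
# parameters N and tokens D should grow together; a common rule of
# thumb is D ~ 20 * N. All constants here are illustrative.

def chinchilla_allocation(flop_budget: float, tokens_per_param: float = 20.0):
    """Return (n_params, n_tokens) with 6 * N * D ~ flop_budget and D ~ 20 * N."""
    n_params = math.sqrt(flop_budget / (6.0 * tokens_per_param))
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

n, d = chinchilla_allocation(5.76e23)
# The budget identity 6 * N * D == C holds by construction.
assert abs(6 * n * d - 5.76e23) / 5.76e23 < 1e-9
```

At this example budget the sketch lands near ~7e10 parameters and ~1.4e12 tokens, i.e. parameters and data scale together rather than compute going mostly into model size.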


Data Scaling Laws

Principles describing how increasing the amount of training data influences model performance, often following a power law relationship with saturation.
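A toy version of such a saturating law adds an irreducible loss floor to the power-law term, so extra data helps less and less and never pushes loss below the floor. All constants below are invented for illustration:

```python
def data_scaling_loss(n_tokens: float, l_inf: float = 1.7,
                      d_c: float = 5.4e13, beta: float = 0.095) -> float:
    """L(D) = L_inf + (D_c / D) ** beta: power-law decay in the token
    count D toward an irreducible floor L_inf (illustrative constants)."""
    return l_inf + (d_c / n_tokens) ** beta

# More data lowers loss, but the curve saturates at the floor l_inf.
assert data_scaling_loss(1e13) < data_scaling_loss(1e12)
assert data_scaling_loss(1e15) > 1.7
```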


Model Size Scaling

Study of how model capabilities evolve based on the number of parameters, revealing predictable improvements up to certain saturation points.


Token Scaling

Analysis of the impact of the number of training tokens on model performance, essential for determining the optimal amount of textual data.


Emergent Abilities

Capabilities that suddenly appear in large models at certain critical scales, which are not present in smaller models of the same family.


Phase Transitions

Abrupt changes in model behavior or performance that occur at specific size or data thresholds.


Neural Scaling Laws

General theoretical framework unifying empirical observations on neural network scaling across different architectures and tasks.


Kaplan Scaling Laws

First empirical scaling laws, established by researchers at OpenAI (Kaplan et al., 2020), showing power-law relationships between model size, data, and performance.


IsoFLOP Curves

Performance curves at constant FLOP budget allowing comparison of different architectures or training strategies at equal computational cost.
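The idea can be sketched as a toy sweep: hold the FLOP budget fixed, trade parameters against tokens along C ≈ 6·N·D, and look for the loss minimum. The two-term loss and its constants below echo published Chinchilla-style fits but should be read as illustrative only:

```python
# Toy IsoFLOP sweep: fix the budget C ~ 6 * N * D, vary model size N,
# spend the remainder on tokens D, and evaluate a two-term loss
# L(N, D) = E + A / N**a + B / D**b (all constants illustrative).

def toy_loss(n: float, d: float) -> float:
    return 1.69 + 406.4 / n**0.34 + 410.7 / d**0.28

def isoflop_sweep(flop_budget: float, sizes):
    """Return (N, loss) pairs along a single IsoFLOP curve."""
    return [(n, toy_loss(n, flop_budget / (6.0 * n))) for n in sizes]

curve = isoflop_sweep(1e21, [1e8, 1e9, 1e10, 1e11])
best_n, best_loss = min(curve, key=lambda p: p[1])  # compute-optimal point
```

The minimum of each such curve marks the compute-optimal (N, D) split for that budget; repeating the sweep at several budgets traces out how the optimum moves with compute.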


Critical Batch Size

Threshold batch size beyond which further increases no longer yield significant improvements in training speed.


Double Descent

Phenomenon where test error decreases, increases, and then decreases again as model capacity grows past the interpolation threshold, the point at which the model can exactly fit the training data.


Grokking

Phenomenon where models suddenly acquire generalizable understanding after a long period of apparent overfitting.


Sharpness-Aware Minimization

Optimization technique seeking flat minima in the loss landscape, particularly important for the stability of large models.


Loss Scaling

Prediction of the evolution of the loss function based on allocated resources, allowing performance estimation before training.


Performance Plateaus

Phases of stagnation in performance improvement despite increasing resources, indicating limits in current scaling laws.


Scaling Exponent

Crucial parameter in power laws determining the rate of performance improvement relative to resource increase.
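Since a power law L = c·N^(−α) is linear in log-log space, the exponent can be recovered as a regression slope. A minimal sketch on synthetic data (the data points and the true α = 0.076 are fabricated for the example):

```python
import math

# Recover a scaling exponent from (resource, loss) measurements: under
# L = c * N**(-alpha), log L is linear in log N, so alpha is the negated
# slope of an ordinary least-squares fit in log-log space.

def fit_exponent(points):
    xs = [math.log(n) for n, _ in points]
    ys = [math.log(l) for _, l in points]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return -slope

# Synthetic measurements generated from alpha = 0.076 exactly.
synthetic = [(n, 3.0 * n ** -0.076) for n in (1e7, 1e8, 1e9, 1e10)]
alpha = fit_exponent(synthetic)
assert abs(alpha - 0.076) < 1e-6
```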

📖
thuật ngữ

Scaling Coefficient

Multiplicative constant in scaling equations determining the baseline performance level before applying scaling effects.
