
AI Glossary

The complete glossary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

📖 terms

Learning without Forgetting (LwF)

Approach that uses knowledge distillation to preserve the model's behavior on previous tasks while learning a new one, without requiring the old data: the original model serves as a teacher whose outputs on the new task's inputs guide the updated model, avoiding performance degradation on earlier tasks.
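
To make the mechanics concrete, here is a minimal sketch of the LwF objective in PyTorch; the function name, temperature `T`, and weighting `lam` are illustrative assumptions, not part of any fixed API.

```python
import torch.nn.functional as F

def lwf_loss(student_logits, teacher_logits, targets, T=2.0, lam=1.0):
    # Standard cross-entropy on the new task's labels.
    ce = F.cross_entropy(student_logits, targets)
    # Distillation term: keep the updated model's softened outputs close
    # to those of the frozen original model (the "teacher").
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return ce + lam * kd
```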

📖 terms

Orthogonal Gradient Descent (OGD)

Method that projects the gradient of the new task onto the space orthogonal to the gradient subspaces of previous tasks. This projection guarantees that learning new tasks does not interfere with directions important for past performance.
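
A minimal sketch of the projection step, assuming the orthonormal gradient directions of past tasks are kept in a simple list; the names are illustrative.

```python
import torch

def ogd_project(grad, basis):
    # Remove from the new-task gradient every component that lies along
    # a stored direction important to previous tasks.
    g = grad.clone()
    for v in basis:  # each v is a unit-norm flattened gradient direction
        g = g - torch.dot(g, v) * v
    return g

def store_direction(basis, grad, eps=1e-8):
    # Gram-Schmidt: keep only the part of an old-task gradient that is
    # orthogonal to the directions already stored.
    r = ogd_project(grad, basis)
    if r.norm() > eps:
        basis.append(r / r.norm())
```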

📖 terms

Dynamically Expandable Networks (DEN)

Framework that dynamically expands the network by adding new units and connections when necessary, while selectively reactivating or deactivating existing connections. DEN adapts model capacity to new requirements without degrading previous performance.
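
A toy sketch of the expansion step for a single linear layer, assuming PyTorch; real DEN also performs selective retraining and uses group-sparsity regularization to prune new units that turn out to be useless.

```python
import torch
import torch.nn as nn

def expand_linear(layer: nn.Linear, extra_units: int) -> nn.Linear:
    # Build a wider layer, copy over the old weights, and let only the
    # freshly initialized rows learn the new task's features.
    wider = nn.Linear(layer.in_features, layer.out_features + extra_units)
    with torch.no_grad():
        wider.weight[: layer.out_features] = layer.weight
        wider.bias[: layer.out_features] = layer.bias
    return wider

# Usage sketch: if the new task's loss stays above a threshold, widen:
# layer = expand_linear(layer, extra_units=16)
```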

📖 terms

PackNet

Pruning-based technique that assigns a dedicated subnetwork to each task via fixed binary masks obtained through iterative magnitude pruning. PackNet packs multiple tasks into the same network without interference by compartmentalizing its parameters.
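
A minimal sketch of the pruning step that carves out a task's mask, assuming PyTorch tensors; `keep_frac` and the names are illustrative. After pruning, the surviving weights are frozen for all later tasks (for example, by zeroing their gradients while training on those tasks).

```python
import torch

def packnet_mask(weight, free, keep_frac=0.5):
    # Among the weights not yet claimed by earlier tasks (`free`), keep
    # the top fraction by magnitude for the current task; the rest are
    # released for future tasks.
    vals = weight[free].abs()
    if vals.numel() == 0:
        return torch.zeros_like(free)
    k = max(1, int(keep_frac * vals.numel()))
    threshold = vals.topk(k).values.min()
    return free & (weight.abs() >= threshold)
```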

📖 terms

HAT (Hard Attention to the Task)

Method that learns binary attention masks per task to select active network weights, thus creating dedicated paths for each task. HAT uses regularization to encourage the use of different weight subsets for different tasks.
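
A minimal sketch of one gated layer, assuming PyTorch; the scale `s` (annealed during training in the paper) is fixed here for simplicity, and the class and argument names are illustrative.

```python
import torch
import torch.nn as nn

class HATGate(nn.Module):
    # One learned embedding row per task; a steep sigmoid turns it into
    # a near-binary mask over the layer's units.
    def __init__(self, n_tasks, n_units):
        super().__init__()
        self.emb = nn.Embedding(n_tasks, n_units)

    def forward(self, h, task_id, s=400.0):
        mask = torch.sigmoid(s * self.emb.weight[task_id])
        return h * mask  # silence units not attended to by this task
```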

📖 terms

CWR (Copy Weight with Reinit)

Strategy that copies the learned weights (in practice, the output layer) into a consolidated store after each task and reinitializes them before training on the next. CWR maintains this consolidated copy of important weights while allowing free adaptation to new knowledge.
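
A minimal sketch of the head bookkeeping, assuming a PyTorch classifier head; the consolidated store and function names are illustrative.

```python
import torch
import torch.nn as nn

consolidated = {}  # class index -> stored output-layer row

def start_task(head: nn.Linear):
    # Reinitialize the temporary head before training on the new task.
    nn.init.zeros_(head.weight)
    nn.init.zeros_(head.bias)

def end_task(head: nn.Linear, classes_in_task):
    # Copy the freshly trained rows for this task's classes into the
    # consolidated bank; rows learned in earlier tasks are untouched.
    with torch.no_grad():
        for c in classes_in_task:
            consolidated[c] = head.weight[c].clone()
```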

📖 terms

PathNet

Evolutionary architecture in which pathways through the network's modules are selected and optimized for each specific task, using a genetic algorithm to find the best combinations. PathNet allows module reuse while isolating parameters by task.
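
A toy sketch of the tournament-selection loop over path genotypes; in practice `fitness` would be accuracy after briefly training the modules a path selects, and all names here are illustrative.

```python
import random

def mutate(path, n_modules, rate=0.1):
    # Randomly reassign some of the module choices in a path genotype.
    return [random.randrange(n_modules) if random.random() < rate else m
            for m in path]

def tournament_step(paths, fitness, n_modules):
    # Evaluate two random paths; the winner's genotype overwrites the
    # loser's, with mutation (PathNet's selection scheme).
    i, j = random.sample(range(len(paths)), 2)
    winner, loser = (i, j) if fitness(paths[i]) >= fitness(paths[j]) else (j, i)
    paths[loser] = mutate(paths[winner], n_modules)
```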

📖 terms

SupSup (Supermasks in Superposition)

Technique that superposes one learned binary weight mask (a supermask) per task on a single fixed network to maximize parameter utilization. SupSup allows a compact network to store and execute multiple tasks simultaneously without forgetting.
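
A minimal sketch of the forward pass, assuming PyTorch and one pre-learned binary mask per task over a fixed random weight matrix; in the paper the masks are learned via score-based straight-through optimization, which is omitted here.

```python
import torch

def supsup_forward(x, W, masks, task_id):
    # The weights W stay fixed and random; only the per-task binary
    # supermask decides which of them participate for this task.
    return x @ (W * masks[task_id]).T

# Usage sketch:
# W = torch.randn(64, 32)
# masks = [(torch.rand(64, 32) > 0.5).float()]  # one mask per task
# y = supsup_forward(torch.randn(8, 32), W, masks, task_id=0)
```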
