
AI Glossary

The complete glossary of Artificial Intelligence

162
categories
2,032
subcategories
23,060
terms

MARL (Multi-Agent Reinforcement Learning)

Learning paradigm where multiple agents interact simultaneously in a shared environment, learning optimal policies individually or collectively.


Centralized Training with Decentralized Execution (CTDE)

Approach where agents are trained using centralized global information but execute their policies in a decentralized manner with local observations.


QMIX (Q-value Mixing)

Q-value decomposition algorithm that represents the joint Q-value as a monotonic non-linear combination of the individual agents' Q-values.
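As an illustrative sketch only (not the original implementation, which generates the mixing weights with a hypernetwork conditioned on the global state), the monotonicity constraint can be shown with a one-layer mixer whose weights are forced non-negative via an absolute value:

```python
import numpy as np

def qmix_mix(agent_qs, weights, bias):
    """Toy one-layer monotonic mixer: taking |weights| guarantees
    dQ_tot / dQ_i >= 0 for every agent i, so each agent's greedy
    action choice is consistent with the joint greedy choice."""
    return float(np.abs(np.asarray(weights)) @ np.asarray(agent_qs) + bias)

# Monotonicity in practice: raising any individual Q never lowers Q_tot,
# even though the raw weight for agent 0 is negative here.
low = qmix_mix([1.0, 2.0], weights=[-0.5, 0.3], bias=0.1)
high = qmix_mix([3.0, 2.0], weights=[-0.5, 0.3], bias=0.1)
```

This per-agent/joint argmax consistency is what lets QMIX train centrally on the joint Q-value yet execute decentrally.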


VDN (Value Decomposition Networks)

Total value factorization method that decomposes the joint value into the sum of each agent's individual value in a cooperative framework.
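A minimal sketch of the additive decomposition; the dict-based tabular Q-values are a toy assumption for clarity (VDN itself uses per-agent networks):

```python
# VDN's core assumption: Q_tot(s, u) = sum_i Q_i(s_i, u_i)
# in a fully cooperative setting.
def vdn_joint_q(per_agent_qs):
    """per_agent_qs: each agent's Q-value for its chosen action."""
    return sum(per_agent_qs)

# Because argmax distributes over an independent sum, each agent can act
# greedily on its own Q-values and still maximize the joint value.
def greedy_joint_action(q_tables):
    """q_tables: one {action: q} dict per agent (toy tabular form)."""
    return [max(q, key=q.get) for q in q_tables]
```

The sum is the simplest monotonic factorization; QMIX generalizes it to monotonic non-linear mixing.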


MADDPG (Multi-Agent Deep Deterministic Policy Gradient)

Extension of DDPG to multi-agent environments using centralized training with decentralized execution for mixed-motive environments.


COMA (Counterfactual Multi-Agent Policy Gradients)

Algorithm that uses a counterfactual baseline to estimate each individual agent's contribution to the global reward by marginalizing out that agent's action while holding the other agents' actions fixed.
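The counterfactual baseline can be sketched as follows; the tabular inputs are a simplifying assumption (COMA itself uses a centralized critic network to supply these Q-values):

```python
def coma_advantage(qs_over_own_actions, policy_probs, taken_action):
    """Advantage for one agent: Q of the taken joint action minus the
    expected Q over that agent's alternative actions, with the other
    agents' actions held fixed (the counterfactual baseline)."""
    baseline = sum(p * q for p, q in zip(policy_probs, qs_over_own_actions))
    return qs_over_own_actions[taken_action] - baseline
```

A positive advantage means the agent's chosen action did better than its own policy's average, which addresses credit assignment without changing the other agents' behavior.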


Dec-POMDP (Decentralized Partially Observable Markov Decision Process)

Mathematical formalization of multi-agent sequential decision-making problems with partial observability and decentralized decision-making.


Credit Assignment

Fundamental problem of determining each agent's contribution to the collective reward in cooperative multi-agent environments.


Joint Action Learning

Technique where agents learn to coordinate their actions by considering the simultaneous actions of all agents in the environment.


Agent Modeling

Ability of an agent to build and maintain mental models of the intentions, beliefs, and policies of other agents in the environment.


Mean Field Theory in MARL

Theoretical approach dealing with large-scale multi-agent interactions by approximating collective influence through a statistical mean field.
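A toy sketch of the approximation, assuming discrete actions encoded as integer indices: each agent sees only the empirical distribution of its neighbors' actions rather than every individual action, reducing a many-agent interaction to a pairwise agent-vs-mean term.

```python
def mean_field(neighbor_actions, n_actions):
    """Summarize neighbors by the empirical mean of their one-hot
    actions; complexity no longer grows with the number of neighbors'
    joint action combinations."""
    dist = [0.0] * n_actions
    for a in neighbor_actions:
        dist[a] += 1.0 / len(neighbor_actions)
    return dist
```

Each agent then conditions its Q-function on (own state, own action, mean field) instead of the full joint action.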


Opponent Modeling

Process of learning the strategies and behaviors of opposing agents to anticipate their actions and optimize one's own policy in competitive games.


Communication Protocols in MARL

Mechanisms that enable agents to exchange information to improve coordination and collective efficiency in cooperative tasks.


Cooperative MARL

Subdomain of MARL where agents share a common objective and maximize collective rewards through coordination and collaboration.


Competitive MARL

Multi-agent framework where individuals or teams compete in zero-sum or non-zero-sum games to maximize their individual rewards.


Mixed-Motive MARL

Multi-agent environments combining cooperative and competitive elements, where agents must balance personal interests and collective objectives.


Emergent Behavior

Complex, unprogrammed behaviors that spontaneously emerge from the interaction between learning agents in a shared environment.


Attention Mechanisms in MARL

Techniques allowing agents to selectively weight information from other agents or parts of the environment for better decision-making.


Curriculum Learning in MARL

Training strategy progressing from simple to complex tasks to facilitate learning of robust policies in multi-agent environments.


Scalability in MARL

The challenge of maintaining learning performance as the joint action space grows exponentially with the number of agents.
