AI Glossary
A complete glossary of artificial intelligence
MARL (Multi-Agent Reinforcement Learning)
Learning paradigm where multiple agents interact simultaneously in a shared environment, learning optimal policies individually or collectively.
Centralized Training with Decentralized Execution (CTDE)
Approach where agents are trained using centralized global information but execute their policies in a decentralized manner with local observations.
QMIX (Q-value Mixing)
Q-value decomposition algorithm that represents the joint Q-value as a monotonic non-linear combination of the individual agents' Q-values.
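A minimal numpy sketch of the QMIX mixing step (weights are randomly initialized for illustration, not from a trained model; the paper uses an ELU activation, replaced here by ReLU for brevity). Taking the absolute value of the hypernetwork outputs keeps every mixing weight non-negative, which enforces the monotonicity constraint dQ_tot/dQ_i ≥ 0:

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, embed, state_dim = 3, 8, 4

state = rng.normal(size=state_dim)      # global state (available only during training)
agent_qs = np.array([1.0, 0.5, -0.2])   # per-agent Q-values for the chosen actions

# Hypernetworks: linear maps from the global state to the mixing weights.
H1 = rng.normal(size=(state_dim, n_agents * embed))
H2 = rng.normal(size=(state_dim, embed))

# abs() makes all mixing weights non-negative -> Q_tot is monotonic in each Q_i.
w1 = np.abs(state @ H1).reshape(n_agents, embed)
w2 = np.abs(state @ H2)

hidden = np.maximum(agent_qs @ w1, 0.0)  # ReLU stand-in for the paper's ELU
q_total = hidden @ w2
```

Because the weights are non-negative and the activation is monotone, raising any single agent's Q-value can never lower the joint value, so per-agent argmax actions remain consistent with the joint argmax.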
VDN (Value Decomposition Networks)
Total value factorization method that decomposes the joint value into the sum of each agent's individual value in a cooperative framework.
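The VDN factorization is just an additive decomposition, sketched below with illustrative numbers:

```python
import numpy as np

# Per-agent Q-values for the chosen joint action (illustrative values,
# not from a trained model).
per_agent_q = np.array([1.5, -0.3, 2.0])

# VDN: the joint Q-value is the plain sum of the individual Q-values,
# so each agent can greedily maximize its own Q_i.
q_total = per_agent_q.sum()
```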
MADDPG (Multi-Agent Deep Deterministic Policy Gradient)
Extension of DDPG to multi-agent settings that trains a centralized critic over all agents' observations and actions while each actor executes decentrally, applicable to mixed cooperative-competitive environments.
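A minimal sketch of the MADDPG structure (linear maps stand in for the actor and critic networks; all weights are random and illustrative). Each actor conditions only on its own observation, while the critic scores the concatenated joint observation-action vector:

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, obs_dim, act_dim = 2, 4, 2

# Decentralized actors: each one sees only its own observation.
actors = [rng.normal(size=(obs_dim, act_dim)) for _ in range(n_agents)]

def act(i, obs):
    # Deterministic policy (illustrative linear stand-in for the actor net).
    return np.tanh(obs @ actors[i])

# Centralized critic: conditions on every agent's observation AND action,
# which is only possible at training time.
critic_w = rng.normal(size=n_agents * (obs_dim + act_dim))

def centralized_q(all_obs, all_acts):
    joint = np.concatenate([np.concatenate([o, a]) for o, a in zip(all_obs, all_acts)])
    return float(joint @ critic_w)

obs = [rng.normal(size=obs_dim) for _ in range(n_agents)]
acts = [act(i, obs[i]) for i in range(n_agents)]
q = centralized_q(obs, acts)
```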
COMA (Counterfactual Multi-Agent Policy Gradients)
Algorithm that uses a counterfactual baseline to estimate each action's contribution to the global reward, marginalizing out one agent's action under its own policy while keeping the other agents' actions fixed.
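The counterfactual advantage can be computed directly from the critic's Q-values, as in this sketch (illustrative numbers):

```python
import numpy as np

# Critic Q-values for each of agent a's candidate actions, holding the
# other agents' actions fixed (illustrative values).
q_given_others = np.array([1.0, 2.0, 0.5])
pi_a = np.array([0.2, 0.5, 0.3])   # agent a's current policy over its actions
chosen = 1                         # action agent a actually took

# Counterfactual baseline: expected Q under agent a's own policy,
# with everyone else's actions unchanged.
baseline = float(pi_a @ q_given_others)
advantage = q_given_others[chosen] - baseline  # credit for agent a's choice
```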
Dec-POMDP (Decentralized Partially Observable Markov Decision Process)
Mathematical formalization of multi-agent sequential decision-making problems with partial observability and decentralized decision-making.
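A Dec-POMDP is conventionally written as a tuple ⟨I, S, {A_i}, T, R, {Ω_i}, O, γ⟩; the sketch below only records those components as a data structure (field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class DecPOMDP:
    agents: list          # I: set of agents
    states: list          # S: global states
    actions: dict         # A_i: per-agent action sets
    transition: callable  # T(s, joint_a) -> distribution over next states
    reward: callable      # R(s, joint_a) -> shared scalar reward
    observations: dict    # Omega_i: per-agent observation sets
    obs_fn: callable      # O(s', joint_a) -> joint observation distribution
    gamma: float          # discount factor
```

Each agent chooses actions from its local observation history alone, which is what makes the problem decentralized despite the shared reward.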
Credit Assignment
Fundamental problem of determining each agent's contribution to the collective reward in cooperative multi-agent environments.
Joint Action Learning
Technique where agents learn to coordinate their actions by considering the simultaneous actions of all agents in the environment.
Agent Modeling
Ability of an agent to build and maintain mental models of the intentions, beliefs, and policies of other agents in the environment.
Mean Field Theory in MARL
Theoretical approach dealing with large-scale multi-agent interactions by approximating collective influence through a statistical mean field.
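The core trick can be sketched in a few lines: instead of conditioning Q on the exponential joint action of all neighbors, condition it on their empirical mean action (one-hot actions below are illustrative):

```python
import numpy as np

# One-hot actions taken by agent i's neighbors (illustrative).
neighbor_actions = np.array([
    [1, 0, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 1, 0],
])

# Mean-field approximation: summarize all neighbors by their mean action,
# so Q depends on (s, a_i, mean_action) rather than the full joint action.
mean_action = neighbor_actions.mean(axis=0)
```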
Opponent Modeling
Process of learning the strategies and behaviors of opposing agents to anticipate their actions and optimize one's own policy in competitive games.
Communication Protocols in MARL
Mechanisms that enable agents to exchange information to improve coordination and collective efficiency in cooperative tasks.
Cooperative MARL
Subdomain of MARL where agents share a common objective and maximize collective rewards through coordination and collaboration.
Competitive MARL
Multi-agent framework where individuals or teams compete in zero-sum or non-zero-sum games to maximize their individual rewards.
Mixed-Motive MARL
Multi-agent environments combining cooperative and competitive elements, where agents must balance personal interests and collective objectives.
Emergent Behavior
Complex, unprogrammed behaviors that spontaneously emerge from the interaction between learning agents in a shared environment.
Attention Mechanisms in MARL
Techniques allowing agents to selectively weight information from other agents or parts of the environment for better decision-making.
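A minimal scaled dot-product attention sketch over other agents' message embeddings (projection matrices are random and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n_others, msg_dim, key_dim = 4, 6, 5

# Embeddings received from the other agents (illustrative).
messages = rng.normal(size=(n_others, msg_dim))

# Hypothetical query/key/value projections.
Wq, Wk, Wv = (rng.normal(size=(msg_dim, key_dim)) for _ in range(3))

own_embed = rng.normal(size=msg_dim)
q = own_embed @ Wq
k = messages @ Wk
v = messages @ Wv

# Softmax over scaled dot-products: each other agent receives a weight,
# and the agent aggregates their values accordingly.
scores = k @ q / np.sqrt(key_dim)
weights = np.exp(scores - scores.max())
weights /= weights.sum()
context = weights @ v   # attention-weighted summary of the other agents
```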
Curriculum Learning in MARL
Training strategy progressing from simple to complex tasks to facilitate learning of robust policies in multi-agent environments.
Scalability in MARL
Challenge of maintaining learning performance as the joint action space grows exponentially with the number of agents.
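The exponential growth is easy to quantify: with n agents each choosing among |A| actions, the joint action space has |A|**n entries.

```python
# Joint action space size: |A| ** n.
num_actions, n_agents = 5, 10
joint_space = num_actions ** n_agents
# Just ten agents with five actions each already yield 9,765,625 joint actions.
```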