AI Glossary
The complete AI glossary
Model-Based Deep Reinforcement Learning
Approach to reinforcement learning where the agent builds an internal model of the environment to simulate and plan its actions, thus reducing the need for real interactions with the environment.
World Model
Learned neural representation of the environment that jointly captures system dynamics, latent states, and rewards, enabling the agent to reason in a simulated space.
Model Predictive Control (MPC)
Control strategy using the learned model to optimize a sequence of future actions over a limited time horizon, continuously re-evaluating the optimal plan at each time step.
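A minimal Python sketch of the re-planning loop described above, using random shooting as the optimizer. The 1-D dynamics, reward, and all constants are toy assumptions for illustration, not from the source:

```python
import random

def mpc_plan(model, reward_fn, state, horizon=5, n_candidates=200, rng=None):
    """Random-shooting MPC: sample candidate action sequences, roll each out
    through the learned model, and return the first action of the best one."""
    rng = rng or random.Random(0)
    best_return, best_first = float("-inf"), 0.0
    for _ in range(n_candidates):
        seq = [rng.uniform(-1.0, 1.0) for _ in range(horizon)]
        s, total = state, 0.0
        for a in seq:
            s = model(s, a)           # model-predicted next state
            total += reward_fn(s, a)  # model-predicted reward
        if total > best_return:
            best_return, best_first = total, seq[0]
    return best_first                 # executed, then re-planned next step

# Toy 1-D system (assumed): the state drifts by the action; the reward
# favors staying near the origin.
action = mpc_plan(lambda s, a: s + a, lambda s, a: -abs(s), state=2.0)
```

Only the first action of the winning sequence is executed; the plan is recomputed from the new state at the next step, which is what makes the control receding-horizon.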
Dyna Architecture
Framework integrating direct and indirect reinforcement learning, where simulated experiences generated by the model complement real data to accelerate learning.
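The direct/indirect split can be sketched with tabular Dyna-Q on a toy 5-state chain (the chain, rewards, and constants are assumptions for illustration): every real step updates Q directly, and the memorized model then replays simulated transitions to accelerate learning.

```python
import random

# Toy chain: actions 0 = left, 1 = right; reward 1 on reaching the rightmost state.
N_STATES, ACTIONS, GOAL = 5, (0, 1), 4

def real_step(s, a):
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == GOAL else 0.0)

rng = random.Random(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
model = {}  # (s, a) -> (s', r): the learned (here simply memorized) model

def update(s, a, r, s2, alpha=0.5, gamma=0.9):
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])

for _ in range(50):                      # real episodes
    s = 0
    while s != GOAL:
        a = rng.choice(ACTIONS)          # random behavior; Q-learning is off-policy
        s2, r = real_step(s, a)
        update(s, a, r, s2)              # direct RL from the real transition
        model[(s, a)] = (s2, r)
        for _ in range(10):              # indirect RL: planning with the model
            ps, pa = rng.choice(list(model))
            ps2, pr = model[(ps, pa)]
            update(ps, pa, pr, ps2)
        s = s2

greedy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
```

The ten planning updates per real step are where Dyna gets its sample efficiency: each real transition is learned from many times via the model.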
Imagination-Augmented Agents (I2A)
Agent architecture that combines a standard model-free policy with imagined rollouts from an environment model, anticipating future consequences before making a decision.
PlaNet
Algorithm learning a dynamics model in a compact latent space to solve continuous control tasks entirely through planning, without an explicit policy.
Dreamer
Agent that learns a world model and trains on imagined ("dreamed") trajectories within it, learning behaviors and values entirely in latent space.
MuZero
Algorithm that jointly learns the model, policy, and value function without prior knowledge of the environment's rules, combining Monte Carlo Tree Search (MCTS) with deep learning.
Latent Space Dynamics
Modeling the temporal evolution of states in a compressed representation space where dynamics are simpler and more stable than in the raw observation space.
Model Uncertainty
Quantification of the environmental model's uncertainty, crucial for identifying areas where the model is reliable and those requiring more exploration or real interactions.
Model Ensemble
Technique using multiple independent environmental models to estimate epistemic uncertainty and improve prediction robustness for planning.
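A minimal sketch of the idea above (the member models and dynamics are toy assumptions): the variance across independent predictors estimates epistemic uncertainty, flagging regions where planning should not trust the model.

```python
# Ensemble prediction: plan with the mean, distrust high-variance regions.
def ensemble_predict(members, s, a):
    preds = [m(s, a) for m in members]
    mean = sum(preds) / len(preds)
    var = sum((p - mean) ** 2 for p in preds) / len(preds)
    return mean, var

# Hypothetical members that agree near s = 0 and diverge far from it.
members = [lambda s, a, k=k: s + a + 0.1 * k * s for k in range(-2, 3)]
mean_near, var_near = ensemble_predict(members, 0.0, 0.5)
mean_far, var_far = ensemble_predict(members, 10.0, 0.5)
```

Because all members fit the same data, they agree where data was plentiful (low variance) and disagree where it was not, which is exactly the epistemic signal planning and exploration can use.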
Planning with Learned Models
Sequential search process using the learned model to evaluate different future action sequences and select the best one according to predicted rewards.
Model-Based Value Expansion (MVE)
Technique using the model to extrapolate returns beyond the real horizon, combining real and simulated data to more accurately estimate long-term values.
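A minimal sketch of an H-step expanded value target (the toy dynamics, reward, and zero value function are assumptions): the model is unrolled for H steps to accumulate predicted rewards, then a learned value estimate bootstraps the tail.

```python
def mve_target(model, reward_fn, value_fn, policy, s, H=3, gamma=0.99):
    """H-step model-based value expansion: model rollout + bootstrapped tail."""
    ret, discount = 0.0, 1.0
    for _ in range(H):
        a = policy(s)
        s = model(s, a)                  # imagined next state
        ret += discount * reward_fn(s, a)
        discount *= gamma
    return ret + discount * value_fn(s)  # bootstrap beyond the model horizon

# Toy check: a "do nothing" policy in s' = s + a with reward -|s|.
target = mve_target(lambda s, a: s + a, lambda s, a: -abs(s),
                    lambda s: 0.0, lambda s: 0.0, s=1.0)
```

Short horizons keep the compounding model error small while still propagating reward information further than a one-step temporal-difference target.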
Model-Based Policy Optimization (MBPO)
Hybrid algorithm that generates synthetic data via short model rollouts branched from real states, while retaining a buffer of real data to stabilize policy learning.
Trajectory Optimization
Direct optimization of state-action sequences using the model's gradient to find optimal trajectories, particularly effective for continuous systems.
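A minimal sketch of gradient-based trajectory optimization on a toy differentiable model (the dynamics s' = s + a, the quadratic cost, and all constants are assumptions; finite differences stand in for autodiff):

```python
def rollout_cost(actions, s0, target):
    """Cost of a trajectory under the model s' = s + a: control effort
    plus squared distance of the final state from the target."""
    s = s0
    for a in actions:
        s = s + a
    return sum(a * a for a in actions) + (s - target) ** 2

def grad(actions, s0, target, eps=1e-5):
    # Finite-difference gradient through the model rollout.
    base = rollout_cost(actions, s0, target)
    g = []
    for i in range(len(actions)):
        pert = actions[:]
        pert[i] += eps
        g.append((rollout_cost(pert, s0, target) - base) / eps)
    return g

actions = [0.0] * 4
for _ in range(200):  # gradient descent directly on the action sequence
    g = grad(actions, 0.0, 2.0)
    actions = [a - 0.1 * gi for a, gi in zip(actions, g)]
final_state = sum(actions)
```

Because the cost penalizes both effort and terminal error, the optimum splits the move evenly across the four steps rather than reaching the target exactly.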
Differentiable Physics Engines
Physics simulators implemented with differentiable operations allowing gradient propagation through simulations for model-based reinforcement learning.
Forward Dynamics Model
Predictive model that learns the state transition s_{t+1} = f(s_t, a_t) to anticipate future consequences of actions in the environment.
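A minimal sketch of learning such a model (the linear system 0.9 s + 0.5 a, the data generation, and the constants are toy assumptions; a neural network would replace the linear fit in practice):

```python
import random

# Collect transitions (s, a, s') from an unknown system: s' = 0.9*s + 0.5*a.
rng = random.Random(0)
data = [(s, a, 0.9 * s + 0.5 * a)
        for s, a in ((rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(200))]

# Fit the forward model s' ~ w_s*s + w_a*a + b by SGD on squared error.
w_s = w_a = b = 0.0
for _ in range(500):
    for s, a, s_next in data:
        err = (w_s * s + w_a * a + b) - s_next
        w_s -= 0.05 * err * s
        w_a -= 0.05 * err * a
        b   -= 0.05 * err
```

Once fitted, the model predicts consequences of untried actions, which is what planning methods such as MPC consume.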
Inverse Dynamics Model
Model that learns to infer the action a_t = f^{-1}(s_t, s_{t+1}) that led from one state to another, useful for imitation learning and action representation.
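A minimal companion sketch to the forward model (again with toy assumptions: the system s' = s + 0.5 a, and a one-parameter fit): the inverse model recovers the action from a pair of consecutive states.

```python
import random

# Observed triples (s, s', a) from an unknown system: s' = s + 0.5*a.
rng = random.Random(1)
triples = [(s, s + 0.5 * a, a)
           for s, a in ((rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(100))]

# Fit the inverse model a ~ w * (s' - s) by SGD; the true answer is w = 2.
w = 0.0
for _ in range(300):
    for s, s2, a in triples:
        err = w * (s2 - s) - a
        w -= 0.1 * err * (s2 - s)
```

In imitation learning, such a model labels raw state sequences with the actions that must have produced them.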
Model-Based Exploration
Exploration strategy that uses model uncertainty to guide the agent towards states where the model is less confident, promoting the learning of a more complete representation.