
AI Glossary

The complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Model-Based Deep Reinforcement Learning

Approach to reinforcement learning where the agent builds an internal model of the environment to simulate and plan its actions, thus reducing the need for real interactions with the environment.

World Model

Complete neural representation of the environment that simultaneously learns system dynamics, latent states, and rewards to enable the agent to reason in a simulated space.

Model Predictive Control (MPC)

Control strategy using the learned model to optimize a sequence of future actions over a limited time horizon, continuously re-evaluating the optimal plan at each time step.
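A minimal sketch of MPC using random shooting, on a toy model and reward that are purely illustrative assumptions (linear dynamics s' = s + a, reward -|s'|); real systems would use a learned neural model:

```python
import random

# Illustrative assumptions: a "learned" linear model s' = s + a and
# reward -|s'| (the goal is to drive the state to 0).
def model(s, a):
    return s + a

def reward(s):
    return -abs(s)

def mpc_action(s, horizon=5, n_samples=200):
    """Random-shooting MPC: sample candidate action sequences, roll them
    out through the model, return the FIRST action of the best sequence."""
    best_ret, best_a0 = float("-inf"), 0.0
    for _ in range(n_samples):
        seq = [random.uniform(-1, 1) for _ in range(horizon)]
        sim, ret = s, 0.0
        for a in seq:
            sim = model(sim, a)
            ret += reward(sim)
        if ret > best_ret:
            best_ret, best_a0 = ret, seq[0]
    return best_a0

random.seed(0)
s = 3.0
for _ in range(10):             # receding horizon: re-plan at every step
    s = model(s, mpc_action(s))
```

Only the first action of each optimized sequence is executed; the plan is then thrown away and recomputed from the new state, which is what makes the horizon "receding".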

Dyna Architecture

Framework integrating direct and indirect reinforcement learning, where simulated experiences generated by the model complement real data to accelerate learning.
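A Dyna-Q sketch on a toy 5-state chain (the environment, learning rates, and episode counts are illustrative assumptions): each real step both updates Q directly and feeds a learned model, which then generates simulated updates:

```python
import random

# Toy chain: states 0..4, actions -1/+1, reward 1 on reaching state 4.
N, GAMMA, ALPHA = 5, 0.9, 0.5
Q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}
model = {}                          # (s, a) -> (r, s'), learned from real steps

def step(s, a):
    s2 = max(0, min(N - 1, s + a))
    return (1.0 if s2 == N - 1 else 0.0), s2

def q_update(s, a, r, s2):
    best = max(Q[(s2, b)] for b in (-1, 1))
    Q[(s, a)] += ALPHA * (r + GAMMA * best - Q[(s, a)])

random.seed(1)
for episode in range(30):
    s = 0
    while s != N - 1:
        a = random.choice((-1, 1))      # exploratory behavior policy
        r, s2 = step(s, a)              # direct RL: one real transition
        q_update(s, a, r, s2)
        model[(s, a)] = (r, s2)         # learn the model from experience
        for _ in range(10):             # indirect RL: replay simulated steps
            ps, pa = random.choice(list(model))
            pr, ps2 = model[(ps, pa)]
            q_update(ps, pa, pr, ps2)
        s = s2

# After learning, the greedy policy should move right in every state.
policy = [max((-1, 1), key=lambda a: Q[(s, a)]) for s in range(N - 1)]
```

The ten simulated updates per real step are where Dyna's sample-efficiency gain comes from: the same real transition is reused many times through the model.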

Imagination-Augmented Agents (I2A)

Agent architecture that combines a standard model-free policy with imagined rollouts generated by an environment model, anticipating future consequences before making a decision.

PlaNet

Algorithm learning a dynamics model in a compact latent space to solve continuous control tasks entirely through planning, without an explicit policy.

Dreamer

Agent that learns a world model and trains in "dream" space on imagined trajectories, learning behaviors and values entirely in latent space.

MuZero

Algorithm that simultaneously learns the model, policy, and value function without prior knowledge of the environment's rules, combining Monte Carlo Tree Search (MCTS) with deep learning.

Latent Space Dynamics

Modeling the temporal evolution of states in a compressed representation space where dynamics are simpler and more stable than in the raw observation space.

Model Uncertainty

Quantification of the environmental model's uncertainty, crucial for identifying areas where the model is reliable and those requiring more exploration or real interactions.

Model Ensemble

Technique using multiple independent environmental models to estimate epistemic uncertainty and improve prediction robustness for planning.
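A minimal sketch of ensemble-based uncertainty, with an assumed 1-D linear dynamics (s' = 0.8s plus noise): several models are fit on bootstrapped samples of the same transitions, and the spread of their predictions serves as epistemic uncertainty:

```python
import random

# Illustrative data: noisy transitions from assumed dynamics s' = 0.8*s.
random.seed(0)
data = [(s, 0.8 * s + random.gauss(0, 0.05))
        for s in [random.uniform(-1, 1) for _ in range(50)]]

def fit(sample):                    # least-squares line through the origin
    num = sum(s * s2 for s, s2 in sample)
    den = sum(s * s for s, _ in sample)
    return num / den

# Each ensemble member sees a different bootstrap resample of the data.
ensemble = [fit([random.choice(data) for _ in data]) for _ in range(5)]

def predict_with_uncertainty(s):
    preds = [w * s for w in ensemble]
    mean = sum(preds) / len(preds)
    var = sum((p - mean) ** 2 for p in preds) / len(preds)
    return mean, var ** 0.5

# Disagreement grows with distance from the training region [-1, 1].
_, std_near = predict_with_uncertainty(0.5)
_, std_far = predict_with_uncertainty(10.0)
```

A planner can then penalize (or, for exploration, reward) trajectories that pass through high-disagreement regions, where the model should not be trusted.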

Planning with Learned Models

Sequential search process using the learned model to evaluate different future action sequences and select the best one according to predicted rewards.

Model-Based Value Expansion (MVE)

Technique using the model to extend return estimates a few steps beyond the real data, combining real and simulated transitions to estimate long-term values more accurately.
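A sketch of the H-step expansion under toy assumptions (deterministic model s' = 0.5s, reward -|s|, and a deliberately crude bootstrap value estimate standing in for a critic): the model rolls rewards forward for H steps, then bootstraps with the value estimate:

```python
GAMMA = 0.9

# Illustrative assumptions: deterministic dynamics, simple reward, and a
# crude value estimate V(s) (a learned critic, in practice).
def model(s):                   # next state under the current policy
    return 0.5 * s

def reward(s):
    return -abs(s)

def value_estimate(s):          # assumed bootstrap critic
    return -2.0 * abs(s)

def mve_value(s, horizon):
    """H-step expansion: V(s) ~= sum_k gamma^k r_k + gamma^H V(s_H)."""
    ret, cur = 0.0, s
    for k in range(horizon):
        ret += (GAMMA ** k) * reward(cur)
        cur = model(cur)
    return ret + (GAMMA ** horizon) * value_estimate(cur)

v0 = mve_value(1.0, 0)          # pure bootstrap: just the critic
v3 = mve_value(1.0, 3)          # 3 model steps, then bootstrap
```

With these toy dynamics the exact value at s = 1 is -1/0.55 ≈ -1.82, and the 3-step expansion lands closer to it than the raw critic estimate, which is the point of MVE: model rollouts correct a biased value function over short horizons.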

Model-Based Policy Optimization (MBPO)

Hybrid algorithm that generates synthetic data from short model rollouts while maintaining a buffer of real data to stabilize policy learning.

Trajectory Optimization

Direct optimization of state-action sequences using the model's gradient to find optimal trajectories, particularly effective for continuous systems.
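A gradient-based trajectory optimization sketch on assumed toy dynamics (s' = s + a) and an assumed quadratic cost (action effort plus distance of the final state to a goal); gradients here are taken by finite differences, standing in for true differentiation through the model:

```python
# Illustrative setup: 4-step horizon, goal state 2.0.
H, GOAL = 4, 2.0

def cost(actions, s0=0.0):
    """Roll the model forward and accumulate effort + terminal cost."""
    s, effort = s0, 0.0
    for a in actions:
        s += a                      # assumed dynamics s' = s + a
        effort += 0.1 * a * a
    return effort + (s - GOAL) ** 2

def optimize(actions, lr=0.2, iters=200, eps=1e-5):
    """Gradient descent directly on the action sequence."""
    for _ in range(iters):
        grad = []
        for i in range(len(actions)):
            bumped = list(actions)
            bumped[i] += eps        # finite-difference gradient
            grad.append((cost(bumped) - cost(actions)) / eps)
        actions = [a - lr * g for a, g in zip(actions, grad)]
    return actions

plan = optimize([0.0] * H)
final_state = sum(plan)             # where the optimized plan ends up
```

The optimizer spreads the required motion evenly across the horizon (effort is quadratic), ending just short of the goal where the marginal effort cost balances the terminal cost.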

Differentiable Physics Engines

Physics simulators implemented with differentiable operations allowing gradient propagation through simulations for model-based reinforcement learning.

Forward Dynamics Model

Predictive model that learns the state transition s_{t+1} = f(s_t, a_t) to anticipate future consequences of actions in the environment.
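A minimal sketch of fitting such a model from transitions, with assumed ground-truth dynamics s' = 0.9s + 0.5a and a linear model trained by plain SGD (a neural network would play this role in practice):

```python
import random

# Generate transitions from assumed true dynamics s' = 0.9*s + 0.5*a.
random.seed(0)
data = [(s, a, 0.9 * s + 0.5 * a)
        for s, a in [(random.uniform(-1, 1), random.uniform(-1, 1))
                     for _ in range(100)]]

# Linear forward model f(s, a) = w_s*s + w_a*a, fit by SGD on squared error.
w_s, w_a = 0.0, 0.0
for _ in range(200):
    for s, a, s2 in data:
        err = (w_s * s + w_a * a) - s2
        w_s -= 0.1 * err * s
        w_a -= 0.1 * err * a

def forward(s, a):
    """Predict the next state for a candidate action."""
    return w_s * s + w_a * a
```

Once fit, `forward` can be chained to roll out imagined trajectories, which is the building block behind the planning and Dyna-style methods above.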

Inverse Dynamics Model

Model that learns to infer the action a_t = f^{-1}(s_t, s_{t+1}) that led from one state to another, useful for imitation learning and action representation.
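The inverse problem can be sketched the same way: with assumed true dynamics s' = s + 0.5a (so the true inverse is a = 2(s' - s)), a linear model a ≈ v_s·s + v_s'·s' is fit by SGD on observed (s, s', a) triples:

```python
import random

# Transitions from assumed dynamics s' = s + 0.5*a, so a = 2*(s' - s).
random.seed(0)
transitions = []
for _ in range(100):
    s, a = random.uniform(-1, 1), random.uniform(-1, 1)
    transitions.append((s, s + 0.5 * a, a))

# Linear inverse model a ~= v_s*s + v_s2*s', fit by SGD on squared error.
v_s, v_s2 = 0.0, 0.0
for _ in range(300):
    for s, s2, a in transitions:
        err = (v_s * s + v_s2 * s2) - a
        v_s -= 0.05 * err * s
        v_s2 -= 0.05 * err * s2
```

The fit recovers the analytic inverse (v_s → -2, v_s' → 2); in imitation learning, such a model labels observation-only demonstrations with the actions that produced them.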

Model-Based Exploration

Exploration strategy that uses model uncertainty to guide the agent towards states where the model is less confident, promoting the learning of a more complete representation.
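A deliberately tiny sketch of the idea using a count-based proxy for model uncertainty (the visit counts are assumed data; ensemble disagreement, as above, is the more common signal in practice):

```python
import math

# Assumed experience: how often the model has seen each state.
visits = {0: 10, 1: 5, 2: 1, 3: 0}

def bonus(s):
    """Exploration bonus: large where the model has little experience."""
    return 1.0 / math.sqrt(visits[s] + 1)

# The agent is steered toward the least-visited, most uncertain state.
target = max(visits, key=bonus)
```

In a full agent this bonus would be added to the predicted reward during planning, so uncertain regions look attractive and the model's coverage of the environment improves.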
