
AI Glossary

The complete AI glossary

162 categories, 2,032 subcategories, 23,060 terms

Contextual Bandit

Reinforcement learning problem in which an agent selects actions based on the observed context in order to maximize cumulative reward.


Exploration vs Exploitation

Fundamental dilemma in which the algorithm must balance exploring new options against exploiting options already known to perform well.
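A minimal sketch of this trade-off is the epsilon-greedy rule; the function name and interface below are illustrative, not from any particular library:

```python
import random

def epsilon_greedy(estimates, epsilon=0.1, rng=random):
    """With probability epsilon, explore a uniformly random arm;
    otherwise exploit the arm with the highest estimated reward."""
    if rng.random() < epsilon:
        return rng.randrange(len(estimates))
    return max(range(len(estimates)), key=estimates.__getitem__)
```

Setting epsilon to 0 gives pure exploitation; setting it to 1 gives pure exploration.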


Upper Confidence Bound (UCB)

Strategy that selects arms based on an upper confidence bound on their expected reward, favoring the exploration of uncertain actions.
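A sketch of the classic UCB1 variant of this strategy, assuming rewards in [0, 1] (the function name and argument layout are illustrative):

```python
import math

def ucb1_select(counts, sums, t):
    """UCB1: play the arm maximizing mean reward plus an
    exploration bonus sqrt(2 ln t / n) that shrinks as the
    arm is sampled more often.

    counts[i] -- pulls of arm i, sums[i] -- its total reward,
    t -- total pulls so far.
    """
    for i, n in enumerate(counts):
        if n == 0:
            return i  # pull every arm once before trusting the bounds
    bounds = [s / n + math.sqrt(2 * math.log(t) / n)
              for s, n in zip(sums, counts)]
    return max(range(len(bounds)), key=bounds.__getitem__)
```

An arm with few pulls gets a large bonus, so uncertain actions are tried even when their current mean looks mediocre.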


Thompson Sampling

Bayesian algorithm that samples reward parameters from their posterior distribution to make probabilistic decisions.
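For Bernoulli rewards the posterior is a Beta distribution, which gives a very short sketch (a Beta(1, 1) uniform prior is assumed; the interface is illustrative):

```python
import random

def thompson_select(successes, failures, rng=random):
    """Thompson sampling for Bernoulli rewards: draw one sample
    from each arm's Beta(s + 1, f + 1) posterior and play the
    arm whose sample is largest."""
    draws = [rng.betavariate(s + 1, f + 1)
             for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=draws.__getitem__)
```

Arms with little data have wide posteriors, so they occasionally produce the largest sample and get explored automatically.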


LinUCB

Extension of UCB that models the expected reward as a linear function of the context, making it suitable for high-dimensional context spaces.


Context Features

Descriptive variables that characterize the current state of the environment and influence the optimal choice of action in contextual bandits.


Regret Minimization

Objective of minimizing the difference between the cumulative reward obtained and that of the optimal policy; the standard measure of a bandit algorithm's performance.
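When the best fixed arm's mean reward is known (as in a simulation), cumulative regret reduces to a one-line computation; the helper name is illustrative:

```python
def cumulative_regret(received, optimal_mean):
    """Regret after T rounds: the expected reward of always playing
    the best arm, minus the reward actually collected."""
    return len(received) * optimal_mean - sum(received)
```

A good algorithm keeps this quantity growing sublinearly in the number of rounds.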


Multi-armed Bandits

Fundamental problem where an agent must select among several options (arms) with unknown reward distributions to maximize gain.
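A minimal simulated environment for this problem, assuming Bernoulli arms (the class name is illustrative):

```python
import random

class BernoulliBandit:
    """K-armed bandit whose arms pay 1 with unknown probabilities."""
    def __init__(self, probs, rng=random):
        self.probs = probs  # true (hidden) success rate of each arm
        self.rng = rng

    def pull(self, arm):
        return 1.0 if self.rng.random() < self.probs[arm] else 0.0
```

The agent only ever observes the 0/1 outcomes of `pull`, never `probs`, and must estimate the best arm from those samples.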


Reward Function

Mathematical function that quantifies the immediate return obtained after taking an action in a given context, guiding the algorithm's learning.


Arm Selection

Process of choosing the optimal action among available options based on current reward estimates and the observed context.


Expected Reward

Anticipated average value of the reward for a given action in a specific context, calculated from historical observations.


Action-Value Function

Function Q(a,x) that estimates the expected future reward by taking action 'a' in context 'x', fundamental for policy evaluation.
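In the non-contextual case such estimates are typically maintained with an incremental mean update, sketched below (the function name is illustrative):

```python
def update_q(q, n, reward):
    """Incremental mean: after the n-th observation of an action,
    Q_new = Q + (reward - Q) / n, which equals the running average
    of all rewards seen for that action."""
    return q + (reward - q) / n
```

This avoids storing the full reward history: one estimate and one counter per action suffice.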


Online Learning

Learning paradigm in which the model is updated continuously as new data arrives, without requiring full retraining.


Stochastic Contextual Bandits

Variant in which, for each context-action pair, rewards are drawn independently and identically from a fixed distribution.


Neural Bandits

Approach using neural networks to approximate the value function or policy, capable of capturing complex non-linear relationships.
