AI Glossary

The Complete Artificial Intelligence Dictionary

162 categories · 2,032 subcategories · 23,060 terms
Contextual Bandit

Reinforcement-learning setting in which an algorithm selects actions based on the observed context in order to maximize cumulative reward.
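
The interaction protocol can be sketched as a simple loop. This is a minimal illustration, assuming a hypothetical policy object exposing `select` and `update` methods (names chosen here for illustration, not from any particular library):

```python
def run_contextual_bandit(policy, contexts, reward_fn):
    """Minimal contextual-bandit loop: observe a context, choose an
    action, receive a reward, and let the policy learn from it."""
    total = 0.0
    for x in contexts:
        a = policy.select(x)     # action chosen for this context
        r = reward_fn(x, a)      # environment returns a reward
        policy.update(x, a, r)   # policy refines its estimates
        total += r
    return total
```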

Exploration vs Exploitation

Fundamental dilemma in which the algorithm must balance trying new options to learn about them (exploration) against choosing the options already known to perform well (exploitation).
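
A common way to strike this balance is the epsilon-greedy rule: explore with a small fixed probability, otherwise exploit. A minimal sketch:

```python
import random

def epsilon_greedy(estimates, epsilon=0.1):
    """With probability epsilon pick a random arm (explore);
    otherwise pick the arm with the highest estimate (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))
    return max(range(len(estimates)), key=lambda a: estimates[a])
```

With `epsilon=0` the choice is purely greedy; with `epsilon=1` it is purely random.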

Upper Confidence Bound (UCB)

Strategy that selects arms based on an upper confidence bound on their expected reward, favoring the exploration of uncertain actions.
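For the classic (non-contextual) case, the UCB1 rule scores each arm by its empirical mean plus a confidence bonus that shrinks as the arm is played more. A sketch:

```python
import math

def ucb1_select(counts, values, t):
    """UCB1: pick the arm maximizing mean + sqrt(2*ln(t)/n_a).
    Arms never tried get infinite priority (forced exploration)."""
    scores = []
    for n, v in zip(counts, values):
        if n == 0:
            scores.append(float("inf"))
        else:
            scores.append(v + math.sqrt(2 * math.log(t) / n))
    return scores.index(max(scores))
```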

Thompson Sampling

Bayesian algorithm that samples reward parameters from their posterior distribution to make probabilistic decisions.
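For Bernoulli rewards the posterior is a Beta distribution, which makes the algorithm only a few lines. A sketch for the Beta-Bernoulli case, starting from a uniform Beta(1, 1) prior:

```python
import random

def thompson_select(successes, failures):
    """Sample a reward probability for each arm from its Beta
    posterior and play the arm with the highest sample."""
    samples = [random.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return samples.index(max(samples))
```

After each round, increment the played arm's success or failure count; the posterior sharpens and exploration fades naturally.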

LinUCB

Extension of UCB that models the expected reward as a linear function of the context, suited to high-dimensional context spaces.
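
A sketch of the per-arm score, assuming each arm keeps the standard LinUCB bookkeeping of a design matrix `A` (context outer products plus the identity) and a response vector `b` (reward-weighted contexts):

```python
import numpy as np

def linucb_score(A, b, x, alpha=1.0):
    """Score one arm: ridge-regression estimate theta = A^-1 b,
    plus an exploration bonus alpha * sqrt(x^T A^-1 x)."""
    A_inv = np.linalg.inv(A)
    theta = A_inv @ b
    return float(theta @ x + alpha * np.sqrt(x @ A_inv @ x))
```

Each round, the arm with the highest score is played, and its `A` and `b` are updated with the observed context and reward.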

Context Features

Descriptive variables that characterize the current state of the environment and influence the optimal choice of action in contextual bandits.

Regret Minimization

Objective of minimizing the difference between the cumulative reward obtained and that of the optimal policy; regret is the standard measure of a bandit algorithm's performance.
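
When the arms' mean rewards are known (as in a simulation), cumulative regret reduces to summing the per-round gaps. A minimal sketch:

```python
def cumulative_regret(optimal_mean, chosen_means):
    """Regret after T rounds: sum over rounds of the gap between
    the best arm's mean reward and the mean of the arm played."""
    return sum(optimal_mean - m for m in chosen_means)

# Playing a 0.5-mean arm twice when the best arm pays 0.9 on
# average gives regret 2 * (0.9 - 0.5) = 0.8.
```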

Multi-armed Bandits

Fundamental problem where an agent must select among several options (arms) with unknown reward distributions to maximize gain.

Reward Function

Mathematical function that quantifies the immediate return obtained after taking an action in a given context, guiding the algorithm's learning.

Arm Selection

Process of choosing the optimal action among available options based on current reward estimates and the observed context.

Expected Reward

Anticipated average value of the reward for a given action in a specific context, calculated from historical observations.
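
The estimate can be maintained incrementally, without storing the full history, via the standard running-mean update. A sketch:

```python
def update_mean(mean, count, reward):
    """Incremental expected-reward estimate for one arm:
    new_mean = mean + (reward - mean) / (count + 1)."""
    count += 1
    mean += (reward - mean) / count
    return mean, count
```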

Action-Value Function

Function Q(a, x) that estimates the expected reward of taking action 'a' in context 'x', fundamental for policy evaluation.

Online Learning

Learning paradigm in which the model adjusts continuously as new data arrives, without requiring full retraining.

Stochastic Contextual Bandits

Variant in which, for each context-action pair, rewards are drawn independently from a fixed distribution (i.i.d.).

Neural Bandits

Approach using neural networks to approximate the value function or policy, capable of capturing complex non-linear relationships.
