
AI Glossary

The Complete Artificial Intelligence Dictionary

162 Categories · 2,032 Subcategories · 23,060 Terms
📖 Term

TabNet

Sequential neural network architecture specifically designed for tabular data, using an attentive masking mechanism to select relevant features interpretably at each decision step.

📖 Term

Attentive Masking

Mechanism at the heart of TabNet that learns to sequentially mask irrelevant features, allowing the model to focus on the most informative variables for a given prediction.
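In the TabNet paper, this mask is produced with the sparsemax activation, which projects logits onto the probability simplex and drives many entries to exactly zero, so irrelevant features are masked out rather than merely down-weighted. A minimal NumPy sketch of sparsemax (illustrative, not a library implementation):

```python
import numpy as np

def sparsemax(z):
    """Project logits z onto the probability simplex; unlike softmax,
    the result can contain exact zeros, yielding a sparse feature mask."""
    z_sorted = np.sort(z)[::-1]                 # sort logits descending
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cumsum         # indices kept in the support
    k_max = k[support][-1]
    tau = (cumsum[k_max - 1] - 1) / k_max       # threshold shared by the support
    return np.maximum(z - tau, 0.0)

mask = sparsemax(np.array([3.0, 1.0, 0.2]))     # only the dominant logit survives
```

Here the weakest logits receive a mask weight of exactly zero, which is what makes the selection interpretable: a feature with zero mask weight provably did not contribute at that step.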

📖 Term

Sequential Feature Learning

Process by which TabNet learns a data representation through multiple steps, where each step refines the feature selection based on information from previous steps.

📖 Term

Feature Transformer

TabNet module that transforms the masked input features into a new, higher-dimensional representation, using fully connected layers and a GLU activation function.

📖 Term

GLU (Gated Linear Unit)

Activation function used in TabNet's Feature Transformer, which enables information flow control by multiplying a linear projection by a sigmoid gate, improving the network's ability to model complex relationships.
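The gating idea fits in a few lines of NumPy. The weight matrices `W`, `V` and biases `b`, `c` below are hypothetical stand-ins for parameters that would be learned during training:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def glu(x, W, b, V, c):
    """Gated Linear Unit: a linear projection of x, elementwise-multiplied
    by a sigmoid gate computed from a second projection of x."""
    return (x @ W + b) * sigmoid(x @ V + c)

# With a zero gate projection the sigmoid outputs 0.5 everywhere,
# so the GLU simply halves the linear branch.
x = np.array([[1.0, 2.0]])
out = glu(x, np.eye(2), np.zeros(2), np.zeros((2, 2)), np.zeros(2))
```

The gate lets the network learn, per unit, how much of the linear signal to pass through, which is the "information flow control" the definition refers to.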

📖 Term

Attentive Transformer

TabNet component that generates the attention mask for the next decision step, based on the previous state and transformed features to determine where to focus attention.
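One piece of the Attentive Transformer's bookkeeping is the prior-scale update from the TabNet paper: features already selected at one step are down-weighted before the next step's mask is computed, with a relaxation parameter (called gamma in the paper) controlling how strongly reuse is discouraged. A hedged NumPy sketch:

```python
import numpy as np

def update_prior(prior, mask, gamma=1.5):
    """Prior-scale update between decision steps.

    prior: running scale on each feature (starts at all ones)
    mask:  attention mask produced at the current step (entries in [0, 1])
    gamma: relaxation parameter; gamma=1 forces each feature to be used
           in at most one step, larger values allow reuse."""
    return prior * (gamma - mask)

prior = np.ones(3)
mask = np.array([1.0, 0.0, 0.0])   # step fully attended to feature 0
prior = update_prior(prior, mask)  # feature 0's prior shrinks for later steps
```

A feature the model attended to heavily gets a reduced prior, so subsequent steps are nudged toward features not yet examined.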

📖 Term

Decision Step

Fundamental processing unit in TabNet architecture, combining a Feature Transformer and an Attentive Transformer to produce a partial output and a mask for the next step.

📖 Term

SER (Series Regression Network)

Theoretical concept that inspired TabNet: modeling a complex prediction as a series of simpler decisions, each refining the final result.

📖 Term

Embedding-based Categorical Encoding

A preprocessing technique for categorical variables in TabNet, where each category is mapped to a low-dimensional dense vector learned during training, allowing the model to capture semantic relationships between categories.
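A minimal sketch of such an encoding, with a hypothetical three-category vocabulary and a randomly initialized table standing in for embeddings that would actually be learned during training:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical categorical feature with three levels.
vocab = {"red": 0, "green": 1, "blue": 2}
emb_dim = 4

# Embedding table: one dense row per category. In a real model these
# rows are trainable parameters updated by backpropagation.
E = rng.normal(size=(len(vocab), emb_dim))

def encode(category):
    """Map a category label to its dense embedding vector."""
    return E[vocab[category]]

vec = encode("blue")   # a 4-dimensional dense vector
```

Unlike one-hot encoding, nearby embedding vectors can end up encoding related categories, which is the "semantic relationships" the definition mentions.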

📖 Term

Batch Normalization

A layer applied in TabNet's blocks to stabilize and accelerate training by normalizing each batch's activations to a zero mean and unit variance.
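The normalization itself is straightforward. A sketch with scalar `gamma`/`beta` standing in for the learned per-feature scale and shift parameters:

```python
import numpy as np

def batch_norm(X, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each column of a batch to zero mean and unit variance,
    then apply a learnable scale (gamma) and shift (beta)."""
    mu = X.mean(axis=0)
    var = X.var(axis=0)
    X_hat = (X - mu) / np.sqrt(var + eps)
    return gamma * X_hat + beta

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
out = batch_norm(X)   # each column now has mean ~0 and std ~1
```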

📖 Term

Sparsity Regularization

A technique used in TabNet to encourage the attention mask to select only a small number of features, thereby promoting simpler and more interpretable models.
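TabNet's sparsity term is an entropy penalty on the mask entries, averaged over samples and decision steps; low entropy means the mask concentrates on few features. A sketch (the exact scaling in a real implementation may differ):

```python
import numpy as np

def sparsity_loss(masks, eps=1e-15):
    """Mean entropy of attention masks.

    masks: array of shape (..., n_features) with rows summing to 1.
    A one-hot mask has entropy ~0 (maximally sparse); a uniform mask
    has entropy log(n_features) (no selection at all)."""
    masks = np.asarray(masks)
    entropy = np.sum(-masks * np.log(masks + eps), axis=-1)
    return float(np.mean(entropy))

sparse_mask = np.array([[1.0, 0.0, 0.0, 0.0]])   # loss near 0
dense_mask = np.full((1, 4), 0.25)               # loss = log(4)
```

Adding this term to the training objective pushes the learned masks toward one-hot selections, which is what makes the resulting feature attributions easy to read.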

📖 Term

Variable Depth Architecture

A property of TabNet where the number of decision steps actually used can vary for each sample, as the model can learn to stop when the information is sufficient for a reliable prediction.

📖 Term

Tabular Data

A type of structured data organized in rows and columns, typical of spreadsheets or relational databases, for which TabNet is specifically optimized.

📖 Term

Robustness to Missing Features

The ability of TabNet to effectively handle missing values in the input data by learning to mask and adapt to them without requiring complex prior imputation.

📖 Term

Reinforcement Learning for Masking

A theoretical perspective where the sequential masking process in TabNet can be viewed as a decision-making process, in which the model learns a feature selection policy to maximize an accuracy reward.

📖 Term

Masked Neural Network

A class of neural networks, of which TabNet is an example, that use learned masks to dynamically select subsets of inputs or neurons, improving efficiency and interpretability.
