
AI Glossary

The complete glossary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

TabNet

A sequential neural network architecture designed specifically for tabular data, using an attentive masking mechanism to select the relevant features at each decision step in an interpretable way.

Attentive Masking

Mechanism at the heart of TabNet that learns to sequentially mask irrelevant features, allowing the model to focus on the most informative variables for a given prediction.
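
The masking idea can be sketched in a few lines: per-feature relevance logits are turned into non-negative weights, which then scale the raw features elementwise. TabNet uses sparsemax for this step; the softmax below is a simpler stand-in, and the logits are hypothetical values rather than learned ones.

```python
import numpy as np

def feature_mask(logits):
    # Softmax stand-in for TabNet's sparsemax: turns per-feature
    # relevance logits into non-negative weights summing to 1.
    e = np.exp(logits - logits.max())
    return e / e.sum()

features = np.array([3.0, -1.0, 0.5, 2.0])  # one sample's raw features
logits = np.array([4.0, -2.0, 0.0, 1.0])    # hypothetical learned relevance scores
mask = feature_mask(logits)
masked = mask * features                    # attention concentrates on feature 0
```

Because the weights sum to 1, a highly confident mask effectively zeroes out the irrelevant columns while passing the informative ones through.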

Sequential Feature Learning

Process by which TabNet learns a data representation through multiple steps, where each step refines the feature selection based on information from previous steps.

Feature Transformer

TabNet module that transforms the masked input features into a new, higher-dimensional representation, using fully connected layers and a GLU activation function.
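
A minimal sketch of this block, assuming a single fully connected projection to twice the output width followed by a GLU (the real module stacks several such layers with batch normalization; the weights here are random placeholders, not trained values):

```python
import numpy as np

rng = np.random.default_rng(0)

def glu(x):
    # Split the projection in half: one half carries values,
    # the other gates them through a sigmoid.
    a, b = np.split(x, 2, axis=-1)
    return a * (1.0 / (1.0 + np.exp(-b)))

def feature_transformer(x, W, bias):
    # Fully connected projection to 2*d_out units, then GLU back down to d_out.
    return glu(x @ W + bias)

d_in, d_out = 4, 3
W = rng.normal(size=(d_in, 2 * d_out))  # placeholder weights
bias = np.zeros(2 * d_out)
x = rng.normal(size=(5, d_in))          # batch of 5 masked samples
h = feature_transformer(x, W, bias)     # shape (5, 3)
```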

GLU (Gated Linear Unit)

Activation function used in TabNet's Feature Transformer, which enables information flow control by multiplying a linear projection by a sigmoid gate, improving the network's ability to model complex relationships.
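
Concretely, GLU splits its input in two: the first half provides values and the second half provides gates, so GLU(a, b) = a * sigmoid(b). A tiny numeric example:

```python
import numpy as np

def glu(x):
    # GLU(a, b) = a * sigmoid(b): the first half passes values,
    # the second half decides how much of each value flows through.
    a, b = np.split(x, 2, axis=-1)
    return a * (1.0 / (1.0 + np.exp(-b)))

z = np.array([2.0, -1.0, 0.0, 10.0])  # projection output, split into (a, b)
out = glu(z)  # gates ≈ sigmoid([0, 10]) = [0.5, ~1], so out ≈ [1.0, -0.99995]
```

A gate near 0 suppresses its value entirely, a gate near 1 lets it through unchanged, which is the "information flow control" the definition refers to.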

Attentive Transformer

TabNet component that generates the attention mask for the next decision step, based on the previous state and transformed features to determine where to focus attention.
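
A sketch of the mask-generation loop, assuming random placeholder weights, a softmax stand-in for sparsemax, and the prior-scale update from the TabNet paper (prior ← prior * (γ − mask)), which discourages reusing features already attended to in earlier steps:

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

d = 4
W = rng.normal(size=(d, d))          # attentive-transformer weights (placeholder)
features = rng.normal(size=(1, d))   # transformed features from the previous step
prior = np.ones((1, d))              # prior scales: every feature starts unused
gamma = 1.3                          # relaxation factor from the TabNet paper

masks = []
for step in range(2):
    mask = softmax(prior * (features @ W))  # softmax stand-in for sparsemax
    prior = prior * (gamma - mask)          # shrink the prior of used features
    masks.append(mask)
```

With γ close to 1 each feature can be selected in roughly one step; larger γ allows the same feature to contribute to several steps.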

Decision Step

Fundamental processing unit in TabNet architecture, combining a Feature Transformer and an Attentive Transformer to produce a partial output and a mask for the next step.
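
The overall loop can be sketched as follows. This is a heavily simplified toy (random placeholder weights, softmax instead of sparsemax, a single shared layer per transformer), but it shows the shape of the computation: mask the input, transform it, emit a ReLU'd partial output, and derive the next mask.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def glu(x):
    a, b = np.split(x, 2, axis=-1)
    return a * (1.0 / (1.0 + np.exp(-b)))

d = 4
x = rng.normal(size=(2, d))           # batch of 2 samples
W_feat = rng.normal(size=(d, 2 * d))  # feature-transformer weights (placeholder)
W_attn = rng.normal(size=(d, d))      # attentive-transformer weights (placeholder)

mask = np.ones_like(x) / d            # uniform attention at step 0
outputs = []
for step in range(3):                 # three sequential decision steps
    h = glu((mask * x) @ W_feat)      # transform the masked features
    outputs.append(np.maximum(h, 0.0))  # ReLU'd partial output of this step
    mask = softmax(h @ W_attn)        # attention mask for the next step
y = sum(outputs)                      # final output aggregates all partial outputs
```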

SER (Series Regression Network)

Theoretical concept that inspired TabNet, consisting of modeling a complex prediction as a series of simpler decisions, each refining the final result.

Embedding-based Categorical Encoding

A preprocessing technique for categorical variables in TabNet, where each category is mapped to a low-dimensional dense vector learned during training, allowing the model to capture semantic relationships between categories.
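
A minimal lookup-table sketch: each category index selects a row of a dense matrix. The table here is randomly initialised; during training these rows would be updated by gradient descent like any other weights.

```python
import numpy as np

rng = np.random.default_rng(2)

categories = ["red", "green", "blue"]
index = {c: i for i, c in enumerate(categories)}
emb_dim = 2
# One learnable dense vector per category (random init; in TabNet
# these rows are trained jointly with the rest of the network).
table = rng.normal(size=(len(categories), emb_dim))

def embed(value):
    return table[index[value]]

vec = embed("green")  # a 2-dim dense vector instead of a 3-dim one-hot
```

Unlike one-hot encoding, the dimension stays small even with thousands of categories, and similar categories can end up with nearby vectors.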

Batch Normalization

A layer applied in TabNet's blocks to stabilize and accelerate training by normalizing each batch's activations to a zero mean and unit variance.
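
The normalization itself is simple to demonstrate (the learnable scale and shift parameters, and the ghost-batch variant TabNet actually uses, are omitted here):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Normalise each feature column to zero mean and unit variance
    # over the batch dimension (scale/shift parameters omitted).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mean) / np.sqrt(var + eps)

batch = np.array([[1.0, 100.0],
                  [3.0, 300.0],
                  [5.0, 500.0]])
normed = batch_norm(batch)  # both columns now have mean 0, std ~1
```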

Sparsity Regularization

A technique used in TabNet to encourage the attention mask to select only a small number of features, thereby promoting simpler and more interpretable models.
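
The TabNet paper implements this as an entropy-style penalty on the masks, averaged over samples and steps; it is low when each mask concentrates on a few features and high when attention is spread evenly. A sketch:

```python
import numpy as np

def sparsity_loss(mask, eps=1e-8):
    # Entropy of each mask row, averaged over the batch: near-one-hot
    # rows give a small penalty, diffuse rows a large one.
    return float((-mask * np.log(mask + eps)).sum(axis=-1).mean())

sparse_mask = np.array([[0.98, 0.01, 0.01]])  # nearly one-hot attention
diffuse_mask = np.array([[1/3, 1/3, 1/3]])    # attention spread evenly
# Adding this term (scaled by a coefficient) to the training loss
# pushes the model toward the sparse, more interpretable masks.
```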

Variable Depth Architecture

A property of TabNet where the number of decision steps actually used can vary for each sample, as the model can learn to stop when the information is sufficient for a reliable prediction.

Tabular Data

A type of structured data organized in rows and columns, typical of spreadsheets or relational databases, for which TabNet is specifically optimized.

Robustness to Missing Features

The ability of TabNet to effectively handle missing values in the input data by learning to mask and adapt to them without requiring complex prior imputation.

Reinforcement Learning for Masking

A theoretical perspective where the sequential masking process in TabNet can be viewed as a decision-making process, in which the model learns a feature selection policy to maximize an accuracy reward.

Masked Neural Network

A class of neural networks, of which TabNet is an example, that use learned masks to dynamically select subsets of inputs or neurons, improving efficiency and interpretability.
