AI Glossary

The complete AI glossary

162 categories · 2,032 subcategories · 23,060 terms

TabNet

Sequential neural network architecture designed specifically for tabular data, using an attentive masking mechanism to select relevant features in an interpretable way at each decision step.
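
For orientation, here is a minimal training sketch using the third-party pytorch-tabnet package (an assumption: that package's TabNetClassifier API); the synthetic data and hyperparameters are illustrative only.

```python
# Sketch assuming the third-party `pytorch-tabnet` package (pip install pytorch-tabnet).
import numpy as np
from pytorch_tabnet.tab_model import TabNetClassifier

# Synthetic tabular data: 1,000 rows, 20 numeric feature columns.
X = np.random.randn(1000, 20).astype(np.float32)
y = (X[:, 0] + X[:, 3] > 0).astype(np.int64)  # label depends on two columns only

clf = TabNetClassifier(n_steps=3)  # three sequential decision steps
clf.fit(X[:800], y[:800], eval_set=[(X[800:], y[800:])], max_epochs=50)

# feature_importances_ aggregates the learned attention masks, so the
# two informative columns should dominate.
print(clf.feature_importances_)
```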

Attentive Masking

Mechanism at the heart of TabNet that learns to sequentially mask irrelevant features, allowing the model to focus on the most informative variables for a given prediction.
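
A toy sketch of the masking idea in PyTorch. One hedge: real TabNet uses sparsemax rather than the softmax stand-in below, so its masks contain exact zeros.

```python
import torch

def apply_attentive_mask(features, attention_logits):
    """Elementwise feature masking, a stand-in for TabNet's step-wise selection.
    TabNet proper uses sparsemax so many mask entries are exactly zero;
    softmax keeps this sketch dependency-free."""
    mask = torch.softmax(attention_logits, dim=-1)  # one weight per feature
    return features * mask                          # irrelevant features shrink toward 0

x = torch.randn(4, 8)       # batch of 4 rows, 8 features
logits = torch.randn(4, 8)  # produced by an attentive transformer in the real model
print(apply_attentive_mask(x, logits).shape)  # torch.Size([4, 8])
```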

Sequential Feature Learning

Process by which TabNet learns a data representation through multiple steps, where each step refines the feature selection based on information from previous steps.
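
The refinement loop can be sketched with a "prior scale" that records how much each feature has already been used; the gamma relaxation update follows the TabNet paper, while toy_mask is a dependency-free placeholder.

```python
import torch

def toy_mask(prior, logits):
    # Placeholder: real TabNet derives logits from an attentive transformer
    # and applies sparsemax; softmax keeps this sketch self-contained.
    return torch.softmax(logits, dim=-1) * prior

batch, n_features, n_steps, gamma = 2, 6, 3, 1.3
prior = torch.ones(batch, n_features)  # every feature fully available at step 0

for step in range(n_steps):
    mask = toy_mask(prior, torch.randn(batch, n_features))
    # Features used heavily at this step become less available later,
    # steering subsequent steps toward complementary features.
    prior = prior * (gamma - mask)
    print(f"step {step}: mean prior = {prior.mean().item():.3f}")
```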

Feature Transformer

TabNet module that transforms the masked input features into a new, higher-dimensional representation, using fully connected layers and a GLU activation function.
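
A single FC → BatchNorm → GLU block, loosely after the paper; the dimensions and single-block depth are illustrative assumptions (TabNet stacks several such blocks, some shared across steps).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureTransformerBlock(nn.Module):
    """One FC -> BatchNorm -> GLU block, loosely after TabNet's feature transformer."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        # The linear layer outputs 2 * out_dim because GLU halves the width:
        # one half carries values, the other half gates them.
        self.fc = nn.Linear(in_dim, 2 * out_dim)
        self.bn = nn.BatchNorm1d(2 * out_dim)

    def forward(self, x):
        return F.glu(self.bn(self.fc(x)), dim=-1)

block = FeatureTransformerBlock(in_dim=16, out_dim=32)
print(block(torch.randn(8, 16)).shape)  # torch.Size([8, 32])
```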

GLU (Gated Linear Unit)

Activation function used in TabNet's Feature Transformer, which enables information flow control by multiplying a linear projection by a sigmoid gate, improving the network's ability to model complex relationships.
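
The gate is only a few lines; the hand-written version below matches PyTorch's built-in torch.nn.functional.glu.

```python
import torch
import torch.nn.functional as F

def glu(x):
    """GLU(x) = a * sigmoid(b), where a and b are the two halves of x
    along the last dimension."""
    a, b = x.chunk(2, dim=-1)
    return a * torch.sigmoid(b)

x = torch.randn(4, 10)
print(torch.allclose(glu(x), F.glu(x, dim=-1)))  # True
```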

Attentive Transformer

TabNet component that generates the attention mask for the next decision step, based on the previous state and transformed features to determine where to focus attention.
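
A sketch close to the paper's formulation, including a small sparsemax (Martins & Astudillo, 2016) so the mask can contain exact zeros; the layer sizes are illustrative.

```python
import torch
import torch.nn as nn

def sparsemax(z):
    """Sparsemax: like softmax, but can output exact zeros (rows still sum to 1)."""
    z_sorted, _ = torch.sort(z, dim=-1, descending=True)
    k = torch.arange(1, z.size(-1) + 1, device=z.device, dtype=z.dtype)
    cumsum = z_sorted.cumsum(dim=-1)
    support = 1 + k * z_sorted > cumsum      # which sorted entries stay nonzero
    k_z = support.sum(dim=-1, keepdim=True)  # support size per row
    tau = (cumsum.gather(-1, k_z - 1) - 1) / k_z.to(z.dtype)
    return torch.clamp(z - tau, min=0.0)

class AttentiveTransformer(nn.Module):
    """Mask generator: FC -> BatchNorm, scaled by the prior, then sparsemax."""
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Linear(dim, dim)
        self.bn = nn.BatchNorm1d(dim)

    def forward(self, processed_features, prior):
        return sparsemax(self.bn(self.fc(processed_features)) * prior)

attn = AttentiveTransformer(dim=6)
mask = attn(torch.randn(4, 6), torch.ones(4, 6))
print(mask.sum(dim=-1))  # each row sums to 1, possibly with exact zeros
```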

Decision Step

Fundamental processing unit in TabNet architecture, combining a Feature Transformer and an Attentive Transformer to produce a partial output and a mask for the next step.
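
A simplified, self-contained decision step. The stand-in linear layers and the softmax (instead of sparsemax) are assumptions made to keep the sketch short, but the data flow of attend, mask, transform, then split into a partial output and the next step's state follows the architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecisionStep(nn.Module):
    """One simplified decision step: attend -> mask -> transform -> split."""
    def __init__(self, n_features, hidden):
        super().__init__()
        self.attn = nn.Linear(hidden, n_features)           # stand-in attentive transformer
        self.transform = nn.Linear(n_features, 4 * hidden)  # stand-in feature transformer

    def forward(self, x, state, prior):
        # Real TabNet applies sparsemax here; softmax keeps the sketch short.
        mask = torch.softmax(self.attn(state) * prior, dim=-1)
        h = F.glu(self.transform(x * mask), dim=-1)  # (batch, 2 * hidden)
        decision, next_state = h.chunk(2, dim=-1)    # split as in the paper
        return F.relu(decision), next_state, mask    # partial outputs are summed across steps

step = DecisionStep(n_features=10, hidden=8)
x, state, prior = torch.randn(4, 10), torch.zeros(4, 8), torch.ones(4, 10)
out, state, mask = step(x, state, prior)
print(out.shape, state.shape, mask.shape)  # (4, 8) (4, 8) (4, 10)
```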

SER (Series Regression Network)

Theoretical concept that inspired TabNet, in which a complex prediction is modeled as a series of simpler decisions, each one refining the final result.

Embedding-based Categorical Encoding

A preprocessing technique for categorical variables in TabNet, where each category is mapped to a low-dimensional dense vector learned during training, allowing the model to capture semantic relationships between categories.
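
A minimal sketch with torch.nn.Embedding; the vocabulary size, embedding width, and "city" column are illustrative.

```python
import torch
import torch.nn as nn

# Each categorical column gets its own embedding table; the learned dense
# vectors replace one-hot columns and are trained end to end.
city_emb = nn.Embedding(num_embeddings=50, embedding_dim=4)  # 50 distinct cities
city_ids = torch.tensor([3, 17, 3, 42])                      # integer-encoded column
dense = city_emb(city_ids)                                   # shape (4, 4)

# Concatenate with numeric columns before the first TabNet layer.
numeric = torch.randn(4, 6)
row = torch.cat([numeric, dense], dim=-1)                    # shape (4, 10)
print(row.shape)
```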

Batch Normalization

A layer applied in TabNet's blocks to stabilize and accelerate training by normalizing each batch's activations to a zero mean and unit variance.
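
A quick demonstration with PyTorch's BatchNorm1d (batch size and feature count are arbitrary):

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(num_features=8)
x = torch.randn(32, 8) * 5 + 3  # activations with mean ~3, std ~5
y = bn(x)
# In training mode each column is normalized with the batch statistics,
# so it comes out ~zero-mean and ~unit-variance (up to the learnable
# affine parameters, which are initialized to the identity).
print(y.mean(dim=0).abs().max().item(), y.std(dim=0).mean().item())
```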

Sparsity Regularization

A technique used in TabNet to encourage the attention mask to select only a small number of features, thereby promoting simpler and more interpretable models.
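
A sketch of the entropy-style penalty described in the TabNet paper; the masks below are random stand-ins, and the weighting coefficient (often called lambda_sparse) is left to the caller.

```python
import torch

def sparsity_loss(masks, eps=1e-10):
    """Entropy penalty on the attention masks: low entropy pushes each row
    of each mask toward a few near-one entries (a sparse selection)."""
    # masks: list of (batch, n_features) tensors, one per decision step
    total = sum((-m * torch.log(m + eps)).sum(dim=-1).mean() for m in masks)
    return total / len(masks)

masks = [torch.softmax(torch.randn(4, 10), dim=-1) for _ in range(3)]
print(sparsity_loss(masks))  # add lambda_sparse * this to the task loss
```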

Variable Depth Architecture

A property of TabNet where the number of decision steps actually used can vary for each sample, as the model can learn to stop when the information is sufficient for a reliable prediction.

Tabular Data

A type of structured data organized in rows and columns, typical of spreadsheets or relational databases, for which TabNet is specifically optimized.

Robustness to Missing Features

The ability of TabNet to effectively handle missing values in the input data by learning to mask and adapt to them without requiring complex prior imputation.

Reinforcement Learning for Masking

A theoretical perspective where the sequential masking process in TabNet can be viewed as a decision-making process, in which the model learns a feature selection policy to maximize an accuracy reward.

Masked Neural Network

A class of neural networks, of which TabNet is an example, that use learned masks to dynamically select subsets of inputs or neurons, improving efficiency and interpretability.
