
AI Glossary

The complete AI glossary

162 categories · 2,032 subcategories · 23,060 terms

Vision Transformer (ViT)

Neural architecture applying the Transformer to image processing by dividing each image into a sequence of patches that are processed like tokens.
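
A minimal end-to-end sketch in PyTorch of this idea; all sizes (192-dim tokens, 4 layers, 10 classes) and the class name are illustrative assumptions, not a reference implementation:

```python
import torch
import torch.nn as nn

class MiniViT(nn.Module):
    # Hypothetical minimal ViT: patchify -> embed -> encoder -> classify via class token.
    def __init__(self, img=224, patch=16, dim=192, depth=4, heads=3, classes=10):
        super().__init__()
        n = (img // patch) ** 2                       # number of patch tokens
        self.embed = nn.Conv2d(3, dim, patch, patch)  # patch embedding as a strided conv
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, n + 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4,
                                           batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, classes)

    def forward(self, x):                             # x: (B, 3, H, W)
        t = self.embed(x).flatten(2).transpose(1, 2)  # (B, n, dim) patch tokens
        t = torch.cat([self.cls.expand(len(x), -1, -1), t], dim=1) + self.pos
        return self.head(self.encoder(t)[:, 0])       # classify from the class token

logits = MiniViT()(torch.randn(2, 3, 224, 224))       # -> (2, 10)
```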

Patch Embedding

Process of converting image patches into fixed-dimensional embedding vectors through a linear projection before feeding them into the Transformer.
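
A short PyTorch sketch of the projection, with assumed sizes (16×16 patches, 192-dim embeddings); note that a strided convolution, as in the ViT sketch above, computes the same per-patch linear map:

```python
import torch
import torch.nn as nn

B, C, H, W, P, D = 2, 3, 224, 224, 16, 192
x = torch.randn(B, C, H, W)

# Cut into non-overlapping P x P patches and flatten each to a vector of C*P*P values.
patches = x.unfold(2, P, P).unfold(3, P, P)                 # (B, C, 14, 14, P, P)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * P * P)  # (B, 196, 768)

proj = nn.Linear(C * P * P, D)   # one linear projection shared by all patches
tokens = proj(patches)           # (B, 196, D) fixed-dimensional embedding vectors
```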

Class Token

Special token prepended to the embedding sequence; its final representation after passing through the Transformer is used for image classification.
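
A sketch of how the class token is prepended and read out, with assumed dimensions; the encoder itself is stubbed out to keep the example focused:

```python
import torch
import torch.nn as nn

dim, n = 192, 196
cls_token = nn.Parameter(torch.zeros(1, 1, dim))    # learnable [CLS] embedding

patch_tokens = torch.randn(8, n, dim)               # batch of patch embeddings
x = torch.cat([cls_token.expand(8, -1, -1), patch_tokens], dim=1)  # (8, 197, dim)

# After the Transformer encoder, the class token's final state drives classification:
encoded = x                                         # stand-in for the encoder output
logits = nn.Linear(dim, 1000)(encoded[:, 0])        # read out position 0 -> (8, 1000)
```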

Multi-Head Self-Attention

Mechanism allowing the model to simultaneously compute multiple attention representations to capture different relationships between image patches.
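A sketch using PyTorch's built-in `nn.MultiheadAttention` (the `average_attn_weights` keyword assumes a reasonably recent PyTorch version); sizes are illustrative:

```python
import torch
import torch.nn as nn

dim, heads = 192, 3
attn = nn.MultiheadAttention(dim, heads, batch_first=True)

tokens = torch.randn(2, 197, dim)   # class token + 196 patch tokens
# Self-attention: queries, keys, and values all come from the same sequence;
# each of the 3 heads computes its own attention pattern over patch pairs.
out, weights = attn(tokens, tokens, tokens, average_attn_weights=False)
print(out.shape, weights.shape)     # (2, 197, 192) and (2, 3, 197, 197): one map per head
```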

Transformer Encoder

Fundamental block composed of self-attention layers and feed-forward networks alternating with normalization and residual connections.
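A sketch of one such block in the pre-norm arrangement common in ViT implementations; the layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    # Pre-norm block: LayerNorm -> attention -> residual, then LayerNorm -> MLP -> residual.
    def __init__(self, dim=192, heads=3):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(),
                                 nn.Linear(dim * 4, dim))

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # residual around attention
        return x + self.mlp(self.norm2(x))                 # residual around feed-forward

y = EncoderBlock()(torch.randn(2, 197, 192))               # (2, 197, 192)
```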

Image Patch Tokenization

Process of cutting an image into non-overlapping fixed-size patches, typically 16x16 pixels, which are then converted into sequential tokens.
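A sketch of the cutting step itself via reshapes (no learned weights involved), with the typical 16×16 patch size assumed:

```python
import torch

H = W = 224
P = 16                                   # typical patch size
n = (H // P) * (W // P)                  # 14 * 14 = 196 tokens per image

img = torch.randn(3, H, W)
# Split each spatial axis into (blocks, P), then gather the blocks as a token axis.
patches = img.reshape(3, H // P, P, W // P, P).permute(1, 3, 0, 2, 4).reshape(n, 3, P, P)
print(patches.shape)                     # (196, 3, 16, 16): non-overlapping patches
```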

Attention Map Visualization

Interpretability technique visualizing attention weights between patches to understand which image regions the model focuses on.
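A sketch of extracting class-token attention as a heatmap; the attention module here is untrained, so the map is random, and the grid size assumes 196 patches:

```python
import torch
import torch.nn as nn

dim, heads, n = 192, 3, 196
attn = nn.MultiheadAttention(dim, heads, batch_first=True)
tokens = torch.randn(1, n + 1, dim)      # [CLS] + patch tokens

# Attention weights from every query to every key, averaged over heads by default.
_, w = attn(tokens, tokens, tokens)      # w: (1, 197, 197)
cls_to_patches = w[0, 0, 1:]             # CLS-token query, patch keys only
heatmap = cls_to_patches.reshape(14, 14) # 14x14 grid over the image
# The heatmap can be upsampled and overlaid on the input, e.g. with matplotlib.
```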

Pre-training on Large Datasets

Initial training phase on datasets of millions of images, such as ImageNet-21k, to learn general visual representations before fine-tuning.
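A sketch of the typical workflow using the timm library; the exact checkpoint name is an assumption and should be verified locally (e.g. with `timm.list_models`):

```python
import timm
import torch

# Load a ViT pre-trained on ImageNet-21k, replacing the head for a 10-class task.
# Checkpoint name is assumed; check timm.list_models('vit_*in21k*') on your install.
model = timm.create_model('vit_base_patch16_224.augreg_in21k',
                          pretrained=True, num_classes=10)
logits = model(torch.randn(1, 3, 224, 224))  # fine-tune this model on the target data
```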

Patch Size Hyperparameter

Crucial parameter defining the size of image patches, directly influencing computational complexity and model performance.
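A small worked example of that influence: for a fixed 224×224 input, halving the patch size quadruples the token count and grows the attention matrix by 16×:

```python
# Token count and attention cost for a 224x224 image at different patch sizes.
for P in (32, 16, 8):
    n = (224 // P) ** 2
    print(f'P={P:2d}: {n:4d} tokens, attention matrix {n}x{n} = {n * n:,} entries')
# P=32:   49 tokens, attention matrix 49x49 = 2,401 entries
# P=16:  196 tokens, attention matrix 196x196 = 38,416 entries
# P= 8:  784 tokens, attention matrix 784x784 = 614,656 entries
```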

Token-to-Patch Reconstruction

Reverse process in generative tasks where tokens are converted back into image patches to reconstruct the original image.
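A sketch of the reverse mapping (as used, for instance, by MAE-style decoders); the linear head and all sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

B, n, dim, P, C = 2, 196, 192, 16, 3
tokens = torch.randn(B, n, dim)             # decoder output tokens

# Project each token back to the pixel values of its P x P patch...
to_pixels = nn.Linear(dim, P * P * C)
patches = to_pixels(tokens).reshape(B, 14, 14, C, P, P)

# ...then fold the 14x14 grid of patches back into a full image.
image = patches.permute(0, 3, 1, 4, 2, 5).reshape(B, C, 14 * P, 14 * P)
print(image.shape)                          # (2, 3, 224, 224)
```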

Hierarchical Vision Transformer

Variant of ViT using a pyramid structure with variable patch sizes to capture multi-scale features.
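A sketch of the core pyramid operation, Swin-style patch merging, which coarsens the token grid between stages; the stage sizes are assumptions:

```python
import torch
import torch.nn as nn

# Patch merging: concatenate each 2x2 neighborhood of tokens and project
# 4*dim -> 2*dim, halving the grid resolution while doubling the channel width.
def patch_merge(x, dim):                     # x: (B, H, W, dim) token grid
    B, H, W, _ = x.shape
    x = x.reshape(B, H // 2, 2, W // 2, 2, dim)
    x = x.permute(0, 1, 3, 2, 4, 5).reshape(B, H // 2, W // 2, 4 * dim)
    return nn.Linear(4 * dim, 2 * dim)(x)    # fresh weights here; a module in practice

stage1 = torch.randn(2, 56, 56, 96)          # fine tokens from small patches
stage2 = patch_merge(stage1, 96)             # (2, 28, 28, 192): coarser, wider
```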

Self-Supervised ViT Pre-training

Unsupervised training methods, such as DINO or MAE, that leverage the Transformer structure to learn without annotations.
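A sketch of the MAE masking step only (the encoder and reconstruction loss are omitted); the 75% mask ratio follows the MAE paper, the rest is assumed:

```python
import torch

B, n, dim = 2, 196, 192
tokens = torch.randn(B, n, dim)
keep = int(n * 0.25)                          # MAE-style: mask 75% of patches

# Randomly keep a small subset of patch tokens; only these are encoded.
perm = torch.rand(B, n).argsort(dim=1)        # a random permutation per image
visible_idx = perm[:, :keep]                  # indices of unmasked patches
visible = torch.gather(tokens, 1, visible_idx[..., None].expand(-1, -1, dim))
print(visible.shape)                          # (2, 49, 192) fed to the encoder;
# a light decoder then reconstructs the masked patches as the training signal.
```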

Cross-Attention in Multi-Modal ViT

Mechanism extending ViT to jointly process images and text using attention between different modalities.
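A sketch of one cross-attention direction (image queries attending to text keys/values); dimensions and the shared embedding width are assumptions:

```python
import torch
import torch.nn as nn

dim = 192
cross = nn.MultiheadAttention(dim, 4, batch_first=True)

img_tokens = torch.randn(2, 196, dim)   # queries: image patch tokens
txt_tokens = torch.randn(2, 32, dim)    # keys/values: text tokens

# Cross-attention: each image patch attends over the text sequence, mixing
# information across modalities (unlike self-attention, Q and K/V differ in source).
fused, _ = cross(img_tokens, txt_tokens, txt_tokens)
print(fused.shape)                      # (2, 196, 192)
```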

Computational Complexity O(n²)

Quadratic complexity of self-attention with respect to the number of patches, which constitutes the main limitation of Vision Transformers.
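A quick illustration of how this limits input resolution: with patch size fixed at 16, the attention matrix grows with the fourth power of the image side length:

```python
# Self-attention memory and compute grow quadratically with token count n.
for side in (224, 384, 1024):                # image resolution, patch size 16
    n = (side // 16) ** 2
    print(f'{side}px -> n={n:5d}, n^2={n * n:,} attention entries')
# 224px  -> n=  196, n^2=38,416 attention entries
# 384px  -> n=  576, n^2=331,776 attention entries
# 1024px -> n= 4096, n^2=16,777,216 attention entries
```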
