AI Glossary

The complete dictionary of Artificial Intelligence

162
Categories
2,032
Subcategories
23,060
Terms
📖
Terms

Vision Transformer (ViT)

Neural architecture that applies the Transformer's attention mechanism to images by splitting each image into a sequence of patches processed as tokens.
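
Below is a minimal sketch of this pipeline in PyTorch, with illustrative sizes (224×224 input, 16×16 patches, embedding dimension 192) rather than the published ViT configurations:

```python
# Minimal ViT forward pass: image -> patches -> tokens -> Transformer -> class logits.
import torch
import torch.nn as nn

img = torch.randn(1, 3, 224, 224)                           # one input image
patch_embed = nn.Conv2d(3, 192, kernel_size=16, stride=16)  # linear projection per patch
tokens = patch_embed(img).flatten(2).transpose(1, 2)        # (1, 196, 192): 14x14 patches

cls = nn.Parameter(torch.zeros(1, 1, 192))        # learnable [CLS] token
pos = nn.Parameter(torch.zeros(1, 197, 192))      # learnable position embeddings
x = torch.cat([cls.expand(1, -1, -1), tokens], dim=1) + pos

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=192, nhead=3, batch_first=True), num_layers=4)
x = encoder(x)

head = nn.Linear(192, 1000)                       # classification head on the [CLS] output
logits = head(x[:, 0])
print(logits.shape)                               # torch.Size([1, 1000])
```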

📖
Terms

Patch Embedding

Process of converting image patches into fixed-dimensional embedding vectors via a learned linear projection so they can be fed into the Transformer.
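
A sketch of the projection with plain tensor ops and one shared nn.Linear; a strided Conv2d with kernel and stride equal to the patch size computes the same thing. All sizes are illustrative:

```python
# Patch embedding as a linear projection: flatten each P x P patch and project
# it to the embedding dimension.
import torch
import torch.nn as nn

B, C, H, W, P, D = 2, 3, 224, 224, 16, 768        # batch, channels, image size, patch, dim
img = torch.randn(B, C, H, W)

patches = img.unfold(2, P, P).unfold(3, P, P)     # (B, C, 14, 14, P, P)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * P * P)  # (B, 196, 768)
proj = nn.Linear(C * P * P, D)                    # one projection shared by all patches
emb = proj(patches)                               # (B, 196, D) patch embeddings
print(emb.shape)
```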

📖
Terms

Class Token

Special token added to the embedding sequence whose final representation after passing through the Transformer is used for image classification.
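
A minimal sketch of how the [CLS] token is prepended and read out, assuming ViT-Base-like dimensions:

```python
# Prepending a learnable [CLS] token: its final hidden state summarizes the
# whole image and feeds the classification head.
import torch
import torch.nn as nn

B, N, D = 4, 196, 768                             # batch, number of patches, embed dim
patch_tokens = torch.randn(B, N, D)

cls_token = nn.Parameter(torch.zeros(1, 1, D))    # shared across the batch, learned
x = torch.cat([cls_token.expand(B, -1, -1), patch_tokens], dim=1)  # (B, N+1, D)

# ... x passes through the Transformer encoder ...
cls_out = x[:, 0]                                 # final [CLS] representation
logits = nn.Linear(D, 1000)(cls_out)              # image classification head
print(logits.shape)                               # torch.Size([4, 1000])
```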

📖
Terms

Multi-Head Self-Attention

Mechanism allowing the model to simultaneously compute multiple attention representations to capture different relationships between image patches.
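
A sketch using torch.nn.MultiheadAttention in self-attention mode; with average_attn_weights=False it returns one 197×197 attention map per head:

```python
# Multi-head self-attention over patch tokens: each head computes its own
# attention pattern, letting the model mix several patch-to-patch relations.
import torch
import torch.nn as nn

B, N, D, H = 1, 197, 768, 12                      # tokens include [CLS]; 12 heads of dim 64
x = torch.randn(B, N, D)

mhsa = nn.MultiheadAttention(embed_dim=D, num_heads=H, batch_first=True)
out, weights = mhsa(x, x, x, average_attn_weights=False)  # self-attention: Q = K = V = x
print(out.shape)      # (1, 197, 768) updated tokens
print(weights.shape)  # (1, 12, 197, 197): one attention map per head
```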

📖
Terms

Transformer Encoder

Fundamental building block alternating self-attention and feed-forward sublayers, each wrapped in layer normalization and a residual connection.
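
A sketch of one pre-norm encoder block (the variant used by ViT), with illustrative ViT-Base-like dimensions:

```python
# One pre-norm ViT encoder block: LayerNorm -> self-attention -> residual,
# then LayerNorm -> MLP -> residual.
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, dim=768, heads=12, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio), nn.GELU(), nn.Linear(dim * mlp_ratio, dim))

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # residual around attention
        return x + self.mlp(self.norm2(x))                 # residual around the MLP

x = torch.randn(2, 197, 768)
print(EncoderBlock()(x).shape)                             # torch.Size([2, 197, 768])
```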

📖
Terms

Image Patch Tokenization

Process of cutting an image into non-overlapping fixed-size patches, typically 16x16 pixels, which are then converted into sequential tokens.
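
A sketch of the tokenization with pure tensor reshapes, assuming a 224×224 RGB image and 16×16 patches:

```python
# Cutting an image into non-overlapping 16x16 patches and flattening each one
# into a token; a 224x224 image yields a sequence of 14 * 14 = 196 tokens.
import torch

img = torch.randn(3, 224, 224)
P = 16
C, H, W = img.shape
tokens = (img.reshape(C, H // P, P, W // P, P)   # split both spatial axes
             .permute(1, 3, 0, 2, 4)             # (14, 14, C, P, P) patch grid
             .reshape(-1, C * P * P))            # (196, 768): one row per patch
print(tokens.shape)                              # torch.Size([196, 768])
```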

📖
Terms

Attention Map Visualization

Interpretability technique visualizing attention weights between patches to understand which image regions the model focuses on.
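
One common recipe, sketched with a randomly initialized attention layer purely to show the shapes: take the [CLS] row of a single head's attention matrix, drop the [CLS] column, and reshape the remaining patch weights into the spatial grid:

```python
# Visualizing where the [CLS] token attends: the 196 patch weights of one head
# are reshaped into a 14x14 map that can be overlaid on the image.
import torch
import torch.nn as nn

x = torch.randn(1, 197, 768)                      # [CLS] + 196 patch tokens
mhsa = nn.MultiheadAttention(768, 12, batch_first=True)
_, attn = mhsa(x, x, x, average_attn_weights=False)  # (1, 12, 197, 197)

head = 0
cls_to_patches = attn[0, head, 0, 1:]             # attention from [CLS] to each patch
attn_map = cls_to_patches.reshape(14, 14)         # spatial grid for plotting
print(attn_map.shape)                             # torch.Size([14, 14])
# e.g. with matplotlib: plt.imshow(attn_map.detach(), cmap="viridis")
```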

📖
Terms

Pre-training on Large Datasets

Initial training phase on large datasets such as ImageNet-21k (millions of images) to learn general visual representations before fine-tuning on a downstream task.
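
A sketch of the fine-tuning side, assuming torchvision's vit_b_16 model and its ViT_B_16_Weights enum; the attribute path model.heads.head follows torchvision's VisionTransformer and may differ in other implementations:

```python
# Loading ImageNet-pretrained ViT weights and swapping the head for fine-tuning.
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)        # pretrained backbone
model.heads.head = nn.Linear(model.heads.head.in_features, 10)  # new 10-class head

# Optionally freeze the backbone and train only the new head.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("heads")
```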

📖
Terms

Patch Size Hyperparameter

Crucial hyperparameter defining the size of the image patches, which directly influences sequence length, computational cost, and model performance.
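
A quick back-of-the-envelope illustration: at a fixed 224×224 resolution, halving the patch size quadruples the token count and grows the per-head attention matrix roughly 16-fold:

```python
# How patch size drives sequence length and attention cost for a 224x224 image.
image_size = 224
for patch in (32, 16, 8):
    n_tokens = (image_size // patch) ** 2
    attn_entries = n_tokens ** 2                  # size of each attention matrix
    print(f"patch {patch:2d}x{patch:<2d} -> {n_tokens:5d} tokens, "
          f"{attn_entries:,} attention entries per head")
```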

📖
Terms

Token-to-Patch Reconstruction

Reverse process in generative tasks where tokens are converted back into image patches to reconstruct the original image.
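
A sketch of the inverse reshape with pure tensor ops, mirroring the tokenization above: each 768-dimensional token becomes one 16×16 RGB patch:

```python
# Reassembling an image from patch tokens (the inverse of tokenization), as a
# generative decoder would do when emitting pixels.
import torch

P, C, grid = 16, 3, 14                            # patch size, channels, 14x14 grid
tokens = torch.randn(grid * grid, C * P * P)      # (196, 768) decoded tokens

img = (tokens.reshape(grid, grid, C, P, P)        # restore the patch grid
             .permute(2, 0, 3, 1, 4)              # (C, 14, P, 14, P)
             .reshape(C, grid * P, grid * P))     # (3, 224, 224) full image
print(img.shape)
```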

📖
Terms

Hierarchical Vision Transformer

Variant of ViT using a pyramid structure with variable patch sizes to capture multi-scale features.
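
As one concrete example, a sketch of Swin-style patch merging, the downsampling step between stages of a hierarchical ViT:

```python
# Swin-style patch merging: each 2x2 group of neighboring tokens is
# concatenated and projected, halving the spatial resolution while
# increasing the channel dimension.
import torch
import torch.nn as nn

B, Hg, Wg, D = 1, 14, 14, 96                      # token grid and embedding dim
x = torch.randn(B, Hg, Wg, D)

merged = torch.cat([x[:, 0::2, 0::2], x[:, 1::2, 0::2],
                    x[:, 0::2, 1::2], x[:, 1::2, 1::2]], dim=-1)  # (B, 7, 7, 4D)
out = nn.Linear(4 * D, 2 * D)(merged)             # project 4D -> 2D channels
print(out.shape)                                  # torch.Size([1, 7, 7, 192])
```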

📖
Terms

Self-Supervised ViT Pre-training

Self-supervised training methods such as DINO or MAE that exploit the Transformer structure to learn visual representations without annotations.
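
A sketch of the random-masking step in the spirit of MAE (not the reference implementation): only a small visible subset of patch tokens is passed to the encoder:

```python
# MAE-style random masking: keep only the visible patch tokens and encode
# just those; the decoder later reconstructs the masked patches.
import torch

B, N, D, mask_ratio = 2, 196, 768, 0.75
tokens = torch.randn(B, N, D)

n_keep = int(N * (1 - mask_ratio))                # 49 visible tokens
noise = torch.rand(B, N)                          # random score per token
keep_idx = noise.argsort(dim=1)[:, :n_keep]       # indices of visible tokens

visible = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
print(visible.shape)                              # torch.Size([2, 49, 768]) -> encoder input
```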

📖
Terms

Cross-Attention in Multi-Modal ViT

Mechanism extending ViT to jointly process images and text using attention between different modalities.
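
A minimal sketch with torch.nn.MultiheadAttention used in cross-attention mode: text tokens act as queries, image patch tokens as keys and values:

```python
# Cross-attention between modalities: text queries attend over image patch
# tokens, fusing visual context into the text stream.
import torch
import torch.nn as nn

text = torch.randn(1, 32, 512)                    # 32 text tokens
image = torch.randn(1, 196, 512)                  # 196 image patch tokens

cross_attn = nn.MultiheadAttention(embed_dim=512, num_heads=8, batch_first=True)
fused, weights = cross_attn(query=text, key=image, value=image)
print(fused.shape)    # (1, 32, 512): text tokens enriched with image information
print(weights.shape)  # (1, 32, 196): which patches each text token attends to
```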

📖
Terms

Computational Complexity O(n²)

Quadratic cost of self-attention with respect to the number of patches, which constitutes the main scalability limitation of Vision Transformers.
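
A back-of-the-envelope illustration of the quadratic growth, assuming a fixed 16-pixel patch size and one fp32 attention matrix per head:

```python
# Quadratic attention cost in practice: the (n+1) x (n+1) attention matrix per
# head quickly dominates memory as resolution grows.
for res in (224, 384, 1024):
    n = (res // 16) ** 2 + 1                      # patch tokens + [CLS]
    bytes_per_head = n * n * 4                    # fp32 attention matrix
    print(f"{res:4d}px -> n = {n:5d}, "
          f"attention matrix = {bytes_per_head / 2**20:.1f} MiB per head")
```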
