AI Glossary

The Complete Artificial Intelligence Dictionary

162 categories · 2,032 subcategories · 23,060 terms

Vision Transformer (ViT)

Neural architecture that applies Transformer mechanisms to images by splitting each image into a sequence of patches and processing the patches as tokens.

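As a concrete illustration, a minimal PyTorch sketch of this pipeline follows; the sizes (224×224 input, 16-pixel patches, ViT-Tiny-scale width and depth) are illustrative assumptions, not the original paper's configuration.

```python
import torch
import torch.nn as nn

# Illustrative sizes (assumed, roughly ViT-Tiny scale).
IMG, PATCH, DIM, HEADS, DEPTH = 224, 16, 192, 3, 4
N = (IMG // PATCH) ** 2  # 14 * 14 = 196 patch tokens

class TinyViT(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        # Patchify + embed in one conv: kernel = stride = patch size.
        self.embed = nn.Conv2d(3, DIM, kernel_size=PATCH, stride=PATCH)
        self.cls = nn.Parameter(torch.zeros(1, 1, DIM))
        self.pos = nn.Parameter(torch.zeros(1, N + 1, DIM))
        layer = nn.TransformerEncoderLayer(DIM, HEADS, DIM * 4,
                                           batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, DEPTH)
        self.head = nn.Linear(DIM, num_classes)

    def forward(self, x):                              # x: (B, 3, 224, 224)
        t = self.embed(x).flatten(2).transpose(1, 2)   # (B, 196, DIM)
        t = torch.cat([self.cls.expand(len(x), -1, -1), t], dim=1)
        t = self.encoder(t + self.pos)                 # sequence processing
        return self.head(t[:, 0])                      # classify from [CLS]

logits = TinyViT()(torch.randn(2, 3, 224, 224))        # (2, 1000)
```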

Patch Embedding

Process of converting image patches into fixed-dimensional embedding vectors through linear projection to feed into the Transformer.

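A sketch of this projection, assuming 16×16 patches and a 768-dimensional embedding; the second variant shows the common equivalent formulation as a strided convolution.

```python
import torch
import torch.nn as nn

B, C, H, W, P, D = 2, 3, 224, 224, 16, 768   # illustrative sizes (assumed)
x = torch.randn(B, C, H, W)

# Unfold the image into flattened C*P*P patches, then project linearly.
patches = x.unfold(2, P, P).unfold(3, P, P)               # (B, C, 14, 14, P, P)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * P * P)
proj = nn.Linear(C * P * P, D)                            # the patch embedding
tokens = proj(patches)                                    # (B, 196, D)

# The same operation is commonly implemented as one strided convolution.
conv = nn.Conv2d(C, D, kernel_size=P, stride=P)
tokens_conv = conv(x).flatten(2).transpose(1, 2)          # (B, 196, D)
```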

Class Token

Learnable token prepended to the embedding sequence; its final representation after passing through the Transformer is used for image classification.

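A minimal sketch, assuming 196 patch tokens of width 768; the encoder itself is elided.

```python
import torch
import torch.nn as nn

B, N, D = 2, 196, 768                      # batch, patch tokens, width (assumed)
patch_tokens = torch.randn(B, N, D)

cls_token = nn.Parameter(torch.zeros(1, 1, D))   # learned during training
tokens = torch.cat([cls_token.expand(B, -1, -1), patch_tokens], dim=1)  # (B, 197, D)

encoded = tokens                           # stand-in for the Transformer encoder
image_repr = encoded[:, 0]                 # the [CLS] position summarizes the image
logits = nn.Linear(D, 1000)(image_repr)    # classification head
```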

Multi-Head Self-Attention

Mechanism that computes several attention representations in parallel, each head capturing different relationships between image patches.

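A from-scratch sketch with illustrative dimensions (no dropout or masking), showing how the embedding is split into independent heads.

```python
import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.h, self.dk = heads, dim // heads
        self.qkv = nn.Linear(dim, 3 * dim)   # one projection yields Q, K and V
        self.out = nn.Linear(dim, dim)

    def forward(self, x):                    # x: (B, N, dim)
        B, N, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Split the embedding into `heads` independent subspaces.
        q, k, v = (t.view(B, N, self.h, self.dk).transpose(1, 2)
                   for t in (q, k, v))
        attn = (q @ k.transpose(-2, -1)) / self.dk ** 0.5   # (B, h, N, N)
        attn = attn.softmax(dim=-1)          # each head weights patches differently
        y = (attn @ v).transpose(1, 2).reshape(B, N, -1)
        return self.out(y)

y = MultiHeadSelfAttention()(torch.randn(2, 197, 768))      # (2, 197, 768)
```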

Transformer Encoder

Fundamental block that alternates self-attention layers and feed-forward networks, each wrapped in layer normalization and a residual connection.

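A sketch of one pre-norm block (the arrangement ViT uses), built from torch.nn primitives with assumed sizes.

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, dim=768, heads=12, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio), nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim))

    def forward(self, x):
        h = self.norm1(x)                                  # pre-normalization
        x = x + self.attn(h, h, h, need_weights=False)[0]  # residual 1: attention
        x = x + self.mlp(self.norm2(x))                    # residual 2: feed-forward
        return x

x = torch.randn(2, 197, 768)
x = EncoderBlock()(x)       # stack several such blocks for the full encoder
```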

Image Patch Tokenization

Process of cutting an image into non-overlapping fixed-size patches, typically 16×16 pixels, which are then converted into a sequence of tokens.

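A sketch using only tensor reshapes, assuming a 224×224 RGB image and the typical 16-pixel patches.

```python
import torch

B, C, H, W, P = 1, 3, 224, 224, 16           # illustrative sizes (assumed)
x = torch.randn(B, C, H, W)

# Cut into non-overlapping P x P patches and flatten each into one token.
gh, gw = H // P, W // P                      # 14 x 14 patch grid
patches = (x.reshape(B, C, gh, P, gw, P)     # split H and W into (grid, patch)
            .permute(0, 2, 4, 1, 3, 5)       # (B, gh, gw, C, P, P)
            .reshape(B, gh * gw, C * P * P)) # (B, 196, 768): a token sequence

print(patches.shape)                         # torch.Size([1, 196, 768])
```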

Attention Map Visualization

Interpretability technique visualizing attention weights between patches to understand which image regions the model focuses on.

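A sketch assuming the last layer's attention weights, shaped (batch, heads, 197, 197), have already been captured (e.g. with a forward hook); it extracts the [CLS] row and upsamples it into an image-sized heatmap.

```python
import torch
import torch.nn.functional as F

# Placeholder for captured weights; token 0 is [CLS], tokens 1..196 the patches.
attn = torch.rand(1, 12, 197, 197).softmax(dim=-1)

cls_attn = attn[0, :, 0, 1:].mean(0)   # [CLS] -> patch attention, averaged over heads
grid = cls_attn.reshape(14, 14)        # back onto the 14 x 14 patch grid

# Upsample to image resolution so it can be overlaid on the input picture.
heatmap = F.interpolate(grid[None, None], size=(224, 224),
                        mode="bilinear", align_corners=False)[0, 0]
print(heatmap.shape)                   # torch.Size([224, 224])
```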

Pre-training on Large Datasets

Initial training phase on datasets of millions of images, such as ImageNet-21k, to learn general visual representations before fine-tuning.


Patch Size Hyperparameter

Crucial hyperparameter defining the size of image patches, which directly influences computational complexity and model performance.

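A small worked example of the trade-off, assuming a 224×224 input: halving the patch size quadruples the token count and, through self-attention, multiplies the pairwise-score count by sixteen.

```python
# Token count and per-head attention cost as a function of patch size.
image = 224
for patch in (32, 16, 8):
    n = (image // patch) ** 2   # number of tokens
    print(f"patch {patch:2d} -> {n:5d} tokens, ~{n * n:,} attention scores per head")
# patch 32 ->    49 tokens, ~2,401 attention scores per head
# patch 16 ->   196 tokens, ~38,416 attention scores per head
# patch  8 ->   784 tokens, ~614,656 attention scores per head
```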

Token-to-Patch Reconstruction

Reverse process in generative tasks where tokens are converted back into image patches to reconstruct the original image.

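A sketch of the inverse reshape, assuming 196 tokens that each hold a flattened 16×16×3 patch (e.g. a decoder's output in masked-autoencoding models).

```python
import torch

B, N, P, C = 1, 196, 16, 3                    # illustrative sizes (assumed)
tokens = torch.randn(B, N, C * P * P)

# Invert patch tokenization: token sequence -> patch grid -> full image.
gh = gw = int(N ** 0.5)                       # 14 x 14 grid
image = (tokens.reshape(B, gh, gw, C, P, P)
               .permute(0, 3, 1, 4, 2, 5)     # (B, C, gh, P, gw, P)
               .reshape(B, C, gh * P, gw * P))

print(image.shape)                            # torch.Size([1, 3, 224, 224])
```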

Hierarchical Vision Transformer

Variant of ViT using a pyramid structure with variable patch sizes to capture multi-scale features.

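A sketch of one such downsampling step, modeled on Swin-style patch merging (an assumption; the entry names no specific variant): each 2×2 group of tokens is merged, halving the grid while widening the channels.

```python
import torch
import torch.nn as nn

class PatchMerging(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.reduce = nn.Linear(4 * dim, 2 * dim)   # 4 tokens -> 1 wider token

    def forward(self, x, h, w):                     # x: (B, h*w, dim)
        B, _, D = x.shape
        # Group each 2x2 neighborhood of the token grid together.
        x = x.reshape(B, h // 2, 2, w // 2, 2, D)
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(B, (h // 2) * (w // 2), 4 * D)
        return self.reduce(x)                       # (B, h*w/4, 2*dim)

x = torch.randn(2, 56 * 56, 96)       # stage-1 tokens (sizes assumed)
y = PatchMerging(96)(x, 56, 56)       # (2, 784, 192): coarser grid, wider features
```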

Self-Supervised ViT Pre-training

Self-supervised training methods such as DINO or MAE that leverage the Transformer structure to learn without annotations.

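A sketch of MAE-style random masking (the 75% ratio and sizes are assumptions): only the visible subset is encoded, and reconstructing the masked patches provides the training signal.

```python
import torch

B, N, D, mask_ratio = 2, 196, 768, 0.75
tokens = torch.randn(B, N, D)                 # embedded patch tokens

n_keep = int(N * (1 - mask_ratio))            # 49 visible tokens
noise = torch.rand(B, N)
ids_shuffle = noise.argsort(dim=1)            # a random permutation per sample
ids_keep = ids_shuffle[:, :n_keep]

visible = torch.gather(tokens, 1,
                       ids_keep[..., None].expand(-1, -1, D))   # (B, 49, D)
# The encoder runs on `visible` only: 4x fewer tokens, ~16x cheaper attention.
print(visible.shape)                          # torch.Size([2, 49, 768])
```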

Cross-Attention in Multi-Modal ViT

Mechanism extending ViT to jointly process images and text using attention between different modalities.

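A minimal sketch using nn.MultiheadAttention with assumed dimensions: image tokens act as queries while text tokens supply keys and values, so each patch representation is updated with linguistic context.

```python
import torch
import torch.nn as nn

dim, heads = 768, 12                     # illustrative sizes (assumed)
img_tokens = torch.randn(2, 197, dim)    # e.g. from a ViT encoder
txt_tokens = torch.randn(2, 32, dim)     # e.g. from a text encoder

cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
fused, weights = cross_attn(query=img_tokens,   # queries from one modality...
                            key=txt_tokens,     # ...keys and values from the other
                            value=txt_tokens)
print(fused.shape, weights.shape)        # (2, 197, 768) and (2, 197, 32)
```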

Computational Complexity O(n²)

Quadratic complexity of self-attention with respect to the number of patches, which constitutes the main limitation of Vision Transformers.

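A small worked example, assuming 16-pixel patches: the token count n grows with image area, and self-attention scores every token against every other, so cost grows as n².

```python
# Pairwise attention scores per layer and head, for a fixed 16-pixel patch size.
for side in (224, 384, 1024):
    n = (side // 16) ** 2
    print(f"{side:4d}px -> n = {n:4d}, n^2 = {n * n:,} pairwise scores")
#  224px -> n =  196, n^2 = 38,416 pairwise scores
#  384px -> n =  576, n^2 = 331,776 pairwise scores
# 1024px -> n = 4096, n^2 = 16,777,216 pairwise scores
```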