
AI Glossary

The complete glossary of Artificial Intelligence

162 categories
2,032 subcategories
23,060 terms

Self-Attention

Fundamental mechanism allowing transformers to dynamically compute the relative importance of each element in a sequence compared to others.

2 terms
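The mechanism described above can be sketched in a few lines of NumPy. This is an illustrative toy, not part of the glossary: the shapes, the random weights, and the helper name `self_attention` are all assumptions made for the example.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (seq_len, d_model)."""
    # Project the sequence into queries, keys, and values
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # Pairwise relevance of every token to every other token
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax: each row of weights sums to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output vector is a relevance-weighted mix of the value vectors
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                      # 4 tokens, embedding dim 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
```

The key point the sketch shows is that the weights are computed dynamically from the input itself, so the same layer can route information differently for every sequence.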

Multi-Head Attention

Extension of self-attention where multiple attention heads operate in parallel to capture different types of relationships in the data.

4 terms
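The parallel-heads idea can be made concrete with a minimal NumPy sketch. Everything here (dimensions, weight shapes, the helper name `multi_head_attention`) is illustrative, not taken from the glossary:

```python
import numpy as np

def multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads):
    """Split the model dimension into n_heads independent attention heads."""
    seq, d = X.shape
    dh = d // n_heads                            # per-head dimension
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Reshape to (n_heads, seq, dh) so each head attends independently
    split = lambda M: M.reshape(seq, n_heads, dh).transpose(1, 0, 2)
    Qh, Kh, Vh = split(Q), split(K), split(V)
    scores = Qh @ Kh.transpose(0, 2, 1) / np.sqrt(dh)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    heads = w @ Vh                               # (n_heads, seq, dh)
    # Concatenate the heads back and mix them with the output projection
    concat = heads.transpose(1, 0, 2).reshape(seq, d)
    return concat @ Wo

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv, Wo = (rng.normal(size=(8, 8)) for _ in range(4))
out = multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads=2)
```

Each head sees only a slice of the representation, which is what lets different heads specialize in different relationships.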

Positional Encoding

Technique that incorporates sequential position information into embeddings to compensate for the absence of recurrence in transformers.

6 terms
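The classic instance of this technique is the sinusoidal encoding from the original transformer paper, which can be sketched directly (the sequence length and dimension below are illustrative):

```python
import numpy as np

def sinusoidal_encoding(seq_len, d_model):
    """Sinusoidal positional encoding: even dims get sine, odd dims get cosine."""
    pos = np.arange(seq_len)[:, None]            # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]         # (1, d_model/2) frequency indices
    angles = pos / (10000 ** (2 * i / d_model))  # wavelengths grow geometrically
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = sinusoidal_encoding(16, 8)                  # added elementwise to token embeddings
```

Because each position gets a unique, smoothly varying pattern, the model can recover both absolute and relative order without any recurrence.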

Encoder-Decoder Architecture

Fundamental structure of original transformers combining an encoder to process input and a decoder to generate output.

8 terms

BERT (Bidirectional Encoder Representations from Transformers)

Family of pre-trained models based on the encoder-only architecture with bidirectional context understanding.

10 terms

GPT (Generative Pre-trained Transformer)

Decoder-only architecture optimized for autoregressive text generation, forming the basis of large language models.

5 terms

Vision Transformers (ViT)

Application of transformer architectures to image processing by dividing images into patches and treating them as sequences.

11 terms

Sparse Attention Mechanisms

Variants of attention reducing computational complexity by limiting connections between sequence elements.

2 terms
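One simple variant of this idea is local (sliding-window) attention, sketched below in NumPy. The window size, shapes, and helper name `local_attention` are assumptions for the example, not the glossary's definitions:

```python
import numpy as np

def local_attention(X, Wq, Wk, Wv, window=1):
    """Self-attention where each token attends only to neighbors within `window`."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    n = X.shape[0]
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    # Mask out all pairs farther apart than the window
    idx = np.arange(n)
    mask = np.abs(idx[:, None] - idx[None, :]) > window
    scores[mask] = -np.inf                       # masked pairs get zero softmax weight
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

rng = np.random.default_rng(2)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = local_attention(X, Wq, Wk, Wv, window=1)
```

Restricting each token to a fixed-size neighborhood drops the cost from quadratic to linear in sequence length, at the price of no direct long-range connections.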

Cross-Attention

Attention mechanism where queries come from one sequence while keys and values come from a different sequence.

2 terms
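The two-sequence structure is the only thing that changes relative to self-attention, which a short NumPy sketch makes clear (sequence lengths and the helper name `cross_attention` are illustrative):

```python
import numpy as np

def cross_attention(Xq, Xkv, Wq, Wk, Wv):
    """Queries come from Xq; keys and values come from a different sequence Xkv."""
    Q = Xq @ Wq
    K, V = Xkv @ Wk, Xkv @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V                                 # output length follows the query sequence

rng = np.random.default_rng(3)
decoder_states = rng.normal(size=(3, 8))         # e.g. 3 target-side tokens
encoder_states = rng.normal(size=(5, 8))         # e.g. 5 source-side tokens
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = cross_attention(decoder_states, encoder_states, Wq, Wk, Wv)
```

This is the mechanism a decoder uses to consult the encoder's output: the result has one vector per query token, each a weighted mix of the other sequence's values.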

Transformer Scaling Laws

Empirical principles describing how transformer performance evolves with model size, data, and computation.

18 terms
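A well-known example of such a principle is the parametric loss curve from the Chinchilla work (Hoffmann et al., 2022), L(N, D) = E + A/N^α + B/D^β, where N is parameter count and D is training tokens. The sketch below uses approximate constants reported in that paper; treat them as illustrative values, not definitive figures:

```python
def chinchilla_loss(N, D, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Parametric scaling law L(N, D) = E + A/N**alpha + B/D**beta.

    N: model parameters, D: training tokens. E is the irreducible loss;
    the two power-law terms shrink as model and dataset grow.
    Constants are approximate fitted values from Hoffmann et al. (2022).
    """
    return E + A / N**alpha + B / D**beta

small = chinchilla_loss(1e9, 1e11)    # 1B-parameter model, 100B tokens
large = chinchilla_loss(1e10, 1e11)   # 10B-parameter model, same data
```

The form makes the key prediction visible: loss falls smoothly as a power law in both axes, and scaling only one of N or D eventually saturates against the other term.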

Attention Head Analysis

Study of the specialized roles of different attention heads in transformers to understand their internal functioning.

19 terms

Hierarchical Attention

Attention architecture organized across multiple levels to process complex structured data.

9 terms