AI Glossary
The Complete Dictionary of Artificial Intelligence
Encoder-Decoder Architecture
Fundamental structure of sequence-to-sequence Transformer models, pairing an encoder that processes the input sequence and builds a representation of it with a decoder that generates the output sequence auto-regressively, one token at a time.
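The auto-regressive generation loop can be sketched as follows. This is a minimal illustration, not a real implementation: `encode` and `decode` are hypothetical stand-ins for a trained model, token IDs with `bos`/`eos` markers are assumed, and greedy argmax is only one of several decoding strategies (beam search and sampling are common alternatives).

```python
import numpy as np

def generate(encode, decode, src, bos=1, eos=2, max_len=20):
    """Greedy auto-regressive decoding: encode the input once, then
    extend the output one token at a time, feeding each prediction
    back in as input for the next step."""
    memory = encode(src)              # encoder runs once over the full input
    out = [bos]
    for _ in range(max_len):
        logits = decode(out, memory)  # scores for the next token, given output so far
        nxt = int(np.argmax(logits))  # greedy choice of the highest-scoring token
        out.append(nxt)
        if nxt == eos:                # the end-of-sequence token stops generation
            break
    return out

# toy stand-ins: a real model would be a trained Transformer
encode = lambda src: np.asarray(src, dtype=float)
decode = lambda out, mem: np.eye(10)[2]   # always predicts token 2 (eos)
generate(encode, decode, [5, 6, 7])       # → [1, 2]
```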
Encoder Stack
Stacking of multiple identically structured encoder layers (each with its own weights), each combining multi-head attention and a feed-forward network, progressively transforming input representations into richer abstractions.
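The stacking itself is a simple composition of layers. In this sketch each layer is a residual feed-forward stand-in for a full encoder layer (a real one would also contain multi-head attention and layer normalization); shapes, the layer count, and the initialization scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # embedding dimension (illustrative)

def make_layer():
    # each layer has identical structure but its own weights
    W1 = rng.normal(size=(d, 4 * d)) * 0.02
    W2 = rng.normal(size=(4 * d, d)) * 0.02
    def layer(x):
        # residual FFN stand-in for a full attention + FFN encoder layer
        return x + np.maximum(0, x @ W1) @ W2
    return layer

layers = [make_layer() for _ in range(6)]  # e.g. 6 layers, as in the original Transformer
x = rng.normal(size=(10, d))               # 10 input tokens, d-dim embeddings
for layer in layers:
    x = layer(x)                           # representation refined layer by layer
```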
Decoder Stack
Stacking of decoder layers that generate the output sequence token by token, integrating both masked self-attention (over previously generated tokens) and cross-attention (over the encoder's output) to model temporal dependencies and input-output relationships.
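The two attention patterns of a decoder layer can be sketched in NumPy as follows. This is a single-head simplification under illustrative assumptions: learned projection matrices, multi-head splitting, layer normalization, and the feed-forward sublayer are omitted for brevity.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(q, k, v, mask=None):
    # scaled dot-product attention; mask=False entries are blocked
    scores = q @ k.T / np.sqrt(q.shape[-1])
    if mask is not None:
        scores = np.where(mask, scores, -1e9)
    return softmax(scores) @ v

def decoder_layer(y, enc_out):
    T = y.shape[0]
    causal = np.tril(np.ones((T, T), dtype=bool))  # token t sees only tokens <= t
    y = y + attention(y, y, y, mask=causal)        # masked self-attention
    y = y + attention(y, enc_out, enc_out)         # cross-attention: Q from decoder, K/V from encoder
    return y
```

The causal mask enforces the auto-regressive property during training, while cross-attention is where the input-output relationship is modeled: every decoder position can attend to every encoder position.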
Transformer Block
Fundamental computational unit combining a multi-head attention mechanism, residual connections, layer normalization, and a feed-forward network, forming the basis of encoders and decoders.
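These components fit together as sketched below, a minimal single-head NumPy version using the post-norm arrangement of the original Transformer (residual add, then layer normalization). Weight names and shapes are illustrative assumptions; real implementations use multi-head attention, biases, and dropout.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def layer_norm(x, eps=1e-5):
    # normalize each token's features to zero mean, unit variance
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def attention(q, k, v):
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def transformer_block(x, Wq, Wk, Wv, Wo, W1, W2):
    # attention sublayer with residual connection + layer norm
    a = attention(x @ Wq, x @ Wk, x @ Wv) @ Wo
    x = layer_norm(x + a)
    # feed-forward sublayer (ReLU) with residual connection + layer norm
    f = np.maximum(0, x @ W1) @ W2
    return layer_norm(x + f)
```

An encoder stacks this block as-is; a decoder block adds the causal mask and a cross-attention sublayer between the self-attention and the feed-forward network.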