AI Glossary
A comprehensive dictionary of Artificial Intelligence
Bidirectional Encoder
Component that processes the entire input sequence simultaneously, allowing each token to attend to all other tokens, both past and future, for complete contextual understanding.
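A minimal NumPy sketch of unmasked (bidirectional) self-attention — the weight names `Wq`, `Wk`, `Wv` are illustrative, not from any particular library:

```python
import numpy as np

def bidirectional_self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention with NO mask: every token
    attends to every other token, past and future alike."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # softmax over ALL positions in the sequence
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v, w

rng = np.random.default_rng(0)
d = 4
x = rng.normal(size=(3, d))                      # 3 tokens, d features each
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out, w = bidirectional_self_attention(x, Wq, Wk, Wv)
# every weight is strictly positive: each token sees past AND future tokens
assert (w > 0).all()
```

Contrast this with the causal (masked) variant used in decoders, where the upper triangle of the weight matrix is forced to zero.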
Autoregressive Decoder
Generation mechanism where the decoder produces the output sequence token by token, based solely on previously generated tokens and the encoder's contextual representation.
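The token-by-token loop can be sketched in plain Python; the toy bigram table standing in for the decoder's next-token prediction is purely illustrative:

```python
def generate(step_fn, bos, max_len):
    """Autoregressive generation: each new token is predicted from
    the tokens generated so far, one position at a time."""
    out = [bos]
    for _ in range(max_len):
        nxt = step_fn(out)          # condition on everything generated so far
        if nxt == "<eos>":          # stop token ends generation
            break
        out.append(nxt)
    return out

# toy stand-in for a trained decoder: a fixed next-token lookup
table = {"<bos>": "the", "the": "cat", "cat": "sat", "sat": "<eos>"}
step = lambda toks: table[toks[-1]]
print(generate(step, "<bos>", 10))   # → ['<bos>', 'the', 'cat', 'sat']
```

In a real model, `step_fn` would run the decoder over the prefix (plus the encoder's representation) and sample or argmax the next token.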
Cross-Attention Mechanism
Process in the decoder that allows it to focus on specific parts of the encoder's output, weighting the importance of each input token for generating the current output token.
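A NumPy sketch of the asymmetry that defines cross-attention: queries come from the decoder, while keys and values come from the encoder output (weight names are illustrative):

```python
import numpy as np

def cross_attention(dec_h, enc_out, Wq, Wk, Wv):
    """Queries from the DECODER states; keys/values from the ENCODER output."""
    q = dec_h @ Wq
    k, v = enc_out @ Wk, enc_out @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    # w[i, j] = importance of input token j for generating output token i
    return w @ v, w

rng = np.random.default_rng(0)
d = 4
enc_out = rng.normal(size=(5, d))    # 5 encoded input tokens
dec_h   = rng.normal(size=(2, d))    # 2 decoder positions so far
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
ctx, w = cross_attention(dec_h, enc_out, Wq, Wk, Wv)
assert w.shape == (2, 5)             # one weight row per output position
```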
Causal Masking
Technique applied in the decoder to prevent each position from attending to future positions, thus ensuring the autoregressive nature of generation and preventing information leakage.
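In practice the mask is an additive matrix of `-inf` above the diagonal, applied to the attention scores before the softmax; a minimal NumPy sketch:

```python
import numpy as np

def causal_mask(n):
    """-inf above the diagonal blocks attention to future positions;
    softmax turns those entries into exactly zero weight."""
    return np.triu(np.full((n, n), -np.inf), k=1)

scores = np.zeros((4, 4)) + causal_mask(4)   # uniform scores, then masked
w = np.exp(scores - scores.max(axis=-1, keepdims=True))
w /= w.sum(axis=-1, keepdims=True)

assert np.allclose(np.triu(w, k=1), 0)   # no weight leaks to future tokens
assert np.allclose(w[3], 0.25)           # last position sees all 4 tokens
```

Position i therefore distributes its attention only over positions 0..i, which is what makes left-to-right generation consistent between training and inference.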
Feed-Forward Network
Fully connected neural network applied to each position independently after the attention mechanism, enabling nonlinear transformation and higher-dimensional projection.
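A sketch of the position-wise feed-forward block, using the common expand-then-project shape (hidden width and ReLU are typical choices, not mandated):

```python
import numpy as np

def ffn(x, W1, b1, W2, b2):
    """Position-wise feed-forward network: project each position into a
    wider hidden space, apply a nonlinearity, project back."""
    h = np.maximum(x @ W1 + b1, 0.0)   # ReLU in the expanded dimension
    return h @ W2 + b2

rng = np.random.default_rng(0)
d_model, d_ff = 4, 16                  # hidden layer wider than the model dim
W1, b1 = rng.normal(size=(d_model, d_ff)), np.zeros(d_ff)
W2, b2 = rng.normal(size=(d_ff, d_model)), np.zeros(d_model)

x = rng.normal(size=(3, d_model))      # 3 token positions
y = ffn(x, W1, b1, W2, b2)
# "applied to each position independently": position 0's output does not
# change if the other positions are removed
assert np.allclose(y[0], ffn(x[:1], W1, b1, W2, b2)[0])
```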
Layer Normalization
Normalization technique that stabilizes activations by normalizing features across each individual example (independently of the batch), then applying a learned scale and shift, accelerating convergence and improving training stability.
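A minimal NumPy sketch: statistics are computed over the feature dimension of each example, with learned scale (`gamma`) and shift (`beta`) parameters:

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Normalize over the FEATURE axis of each example (not the batch),
    then rescale with learned parameters gamma and beta."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(2, 8))   # 2 examples, 8 features
y = layer_norm(x, gamma=np.ones(8), beta=np.zeros(8))

# each example is individually standardized: mean ~0, std ~1 over features
assert np.allclose(y.mean(axis=-1), 0.0, atol=1e-6)
assert np.allclose(y.std(axis=-1), 1.0, atol=1e-2)
```

Because no batch statistics are involved, the result is identical for batch size 1 and batch size 1000, unlike batch normalization.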
Encoder Bottleneck
Fixed-dimensional vector representation, often the final output of the encoder, that condenses all information from the input sequence and serves as the sole context available to the decoder during generation.
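One simple way to realize such a bottleneck is to pool the encoder's per-token states into a single vector (mean pooling here is an illustrative choice; classic seq2seq models instead reuse the final hidden state):

```python
import numpy as np

def bottleneck(enc_states):
    """Condense a variable-length sequence of encoder states into one
    fixed-size vector by mean pooling over the time axis."""
    return enc_states.mean(axis=0)

short = bottleneck(np.ones((3, 8)))    # 3-token input
long  = bottleneck(np.ones((50, 8)))   # 50-token input

# the decoder always receives the same fixed dimensionality,
# regardless of input length
assert short.shape == long.shape == (8,)
```

This fixed size is also the classic weakness of bottleneck architectures on long inputs, which attention-based encoder-decoder models address by letting the decoder look back at all encoder states.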
Token Embeddings
Dense vectors that represent each discrete token from the vocabulary in a continuous space, capturing semantic and syntactic information learned during training.
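At its core this is a lookup table: one learned row per vocabulary entry. A minimal NumPy sketch (the tiny vocabulary and dimension 8 are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"<pad>": 0, "the": 1, "cat": 2}
# one dense, trainable row per vocabulary entry (randomly initialized here)
emb_table = rng.normal(size=(len(vocab), 8))

def embed(tokens):
    """Embedding lookup: map discrete token ids to continuous vectors."""
    return emb_table[[vocab[t] for t in tokens]]

x = embed(["the", "cat"])
assert x.shape == (2, 8)                          # one vector per token
assert np.allclose(x[0], emb_table[vocab["the"]]) # same token, same vector
```

During training, gradients flow into the looked-up rows, so tokens that occur in similar contexts end up with similar vectors.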