AI Glossary
The complete artificial intelligence glossary
Sparse Transformer
Transformer variant using predefined sparse attention patterns to reduce the number of attention connections computed while still capturing long-range dependencies. The architecture factorizes full attention into subsets of positions (such as strided and fixed patterns), lowering the cost from O(n²) to O(n√n).
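As an illustration, here is a minimal NumPy sketch of the two factorized patterns described in the Sparse Transformer paper, a local window head and a strided-skip head, built as boolean masks; the helper name strided_sparse_masks and the toy sizes are illustrative, not from the reference implementation.

```python
import numpy as np

def strided_sparse_masks(n: int, stride: int):
    i = np.arange(n)[:, None]   # query positions
    j = np.arange(n)[None, :]   # key positions
    causal = j <= i
    local = causal & (i - j < stride)           # attend to the recent window
    strided = causal & ((i - j) % stride == 0)  # attend every `stride` steps
    return local, strided

local, strided = strided_sparse_masks(n=16, stride=4)
# Each mask keeps O(n * stride) of the n^2 pairs; choosing stride ~ sqrt(n)
# yields the paper's O(n * sqrt(n)) combined cost.
print(local.sum(), strided.sum(), 16 * 16)
```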
Compressive Transformer
Extension of Transformer-XL that compresses old hidden states from memory into a smaller set of denser vectors instead of discarding them, preserving long-term history. This compression stores far more contextual information within a fixed memory budget.
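A minimal sketch of the compression step, assuming mean-pooling as the compression function (the paper also evaluates convolutional and learned compressors); compress is a hypothetical helper, not the published API.

```python
import numpy as np

def compress(old_memories: np.ndarray, rate: int) -> np.ndarray:
    """Compress (t, d) old hidden states into (t // rate, d) denser vectors."""
    t, d = old_memories.shape
    t = (t // rate) * rate                      # drop any ragged tail
    return old_memories[:t].reshape(-1, rate, d).mean(axis=1)

memory = np.random.randn(12, 8)    # 12 old hidden states, width 8
compressed = compress(memory, rate=3)
print(compressed.shape)            # (4, 8): 3x fewer slots, same width
```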
Universal Transformer
Architecture that applies a single shared-weight Transformer layer recurrently, so effective depth is determined by an adaptive halting mechanism (Adaptive Computation Time) rather than fixed in advance. The Universal Transformer iterates self-attention and a transition function until the halting criterion is met.
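A minimal sketch of the halting loop, with stand-in functions for the shared Transformer block and the learned halting unit; shared_layer and halt_prob are illustrative stubs, and the real Adaptive Computation Time mechanism halts per position with a remainder term.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))        # one weight matrix, reused at every step

def shared_layer(h):               # stand-in for the shared Transformer block
    return np.tanh(h @ W)

def halt_prob(h):                  # stand-in for the learned halting unit
    return 1.0 / (1.0 + np.exp(-h.mean()))

h = rng.normal(size=(8,))
cumulative, step = 0.0, 0
while cumulative < 1.0 - 1e-2 and step < 10:   # 10 = maximum depth cap
    h = shared_layer(h)            # same weights applied at every "depth"
    cumulative += halt_prob(h)     # stop once halting mass reaches ~1
    step += 1
print("halted after", step, "steps")
```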
Set Transformer
Permutation-invariant attention architecture for processing sets of elements with no predefined order. The Set Transformer uses Induced Set Attention Blocks (ISAB) to keep cost manageable and Pooling by Multihead Attention (PMA) to aggregate a set into a fixed-size output.
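A minimal sketch of attention pooling over a set, the idea behind PMA, using a single seed query and plain softmax attention (multi-head projections omitted); it also checks that the pooled output is unchanged when the set is permuted.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
seed = rng.normal(size=(1, 8))            # learned query ("seed") vector
X = rng.normal(size=(5, 8))               # a set of 5 elements, width 8

def pool(seed, X):
    attn = softmax(seed @ X.T / np.sqrt(8))   # (1, 5) weights over the set
    return attn @ X                           # (1, 8) pooled summary

perm = rng.permutation(5)
print(np.allclose(pool(seed, X), pool(seed, X[perm])))  # True: order-free
```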
Synthesizer
Variant in which attention weights are synthesized without any query-key comparison: they are either learned as free parameters (Random Synthesizer) or generated from each token individually by a small feed-forward network (Dense Synthesizer). This approach eliminates pairwise QK similarity computation entirely.
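A minimal sketch of the Random Synthesizer variant, where the attention logits are a freely learned matrix and no queries or keys are computed; the Dense variant would instead produce each row from its own token via a small MLP. Variable names are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n, d = 6, 8
R = rng.normal(size=(n, n))        # learned logits, trained like any weight
V = rng.normal(size=(n, d))        # values still come from the tokens
out = softmax(R) @ V               # attention without any QK^T product
print(out.shape)                   # (6, 8)
```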
Linear Transformer
Architecture using a kernelized decomposition of attention to achieve complexity linear in sequence length for both time and memory. The Linear Transformer replaces the softmax with a positive kernel feature map, allowing the attention product to be reordered associatively so the n×n matrix is never materialized.
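A minimal sketch of kernelized linear attention with the elu(x)+1 feature map used by Katharopoulos et al., showing how associative reordering turns the O(n²) product (φ(Q)φ(K)ᵀ)V into the O(n) form φ(Q)(φ(K)ᵀV); the toy sizes are arbitrary.

```python
import numpy as np

def phi(x):                         # positive kernel feature map: elu(x) + 1
    return np.where(x > 0, x + 1.0, np.exp(x))

rng = np.random.default_rng(0)
n, d = 16, 8
Q, K, V = rng.normal(size=(3, n, d))

qs, ks = phi(Q), phi(K)
kv = ks.T @ V                       # (d, d): summarizes all keys/values once
z = qs @ ks.sum(axis=0)             # (n,): per-query normalizer
out = (qs @ kv) / z[:, None]        # (n, d), no n x n matrix ever built

# Same result as the quadratic ordering, up to float error:
ref = (qs @ ks.T) @ V / (qs @ ks.T).sum(axis=1, keepdims=True)
print(np.allclose(out, ref))        # True
```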
Local Attention
Attention mechanism restricted to a fixed-size neighborhood around each position, reducing the number of token pairs from O(n²) to O(n·w) for window size w. This approach is particularly effective for data with strong local structure.
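A minimal sketch of a sliding-window mask in NumPy; local_mask is a hypothetical helper showing which pairs a full model would keep before setting the remaining logits to -inf ahead of the softmax.

```python
import numpy as np

def local_mask(n: int, window: int) -> np.ndarray:
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    return np.abs(i - j) <= window      # True where attention is allowed

mask = local_mask(n=8, window=2)
print(mask.astype(int))
# Cost drops from O(n^2) pairs to O(n * window).
```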
Dilated Attention
Extension of sliding-window attention that uses dilated (gapped) patterns to capture longer-range dependencies without increasing the number of attended pairs. The gaps let the receptive field expand exponentially as layers with growing dilation are stacked.
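A minimal sketch of a dilated mask, extending the local mask above by keeping only every dilation-th neighbor inside a wider window; the function name and sizes are illustrative.

```python
import numpy as np

def dilated_mask(n: int, window: int, dilation: int) -> np.ndarray:
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    dist = np.abs(i - j)
    return (dist <= window * dilation) & (dist % dilation == 0)

print(dilated_mask(n=12, window=2, dilation=3).astype(int))
# Stacking layers with growing dilation (1, 2, 4, ...) expands the
# receptive field exponentially at constant per-layer cost.
```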
Axial Attention
Decomposition of multidimensional attention into one-dimensional attentions applied sequentially along each axis. For n elements arranged on a d-dimensional grid, axial attention reduces the cost from O(n²) to O(d·n·n^(1/d)), e.g. O(H·W·(H+W)) for an H×W image.
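A minimal sketch of axial attention on an H×W grid, running plain softmax attention along rows and then columns so no position ever attends over the full plane at once; attend is an illustrative single-head helper without learned projections.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(x):                       # self-attention along the last grid axis
    scores = x @ x.swapaxes(-1, -2) / np.sqrt(x.shape[-1])
    return softmax(scores) @ x

rng = np.random.default_rng(0)
H, W, d = 4, 6, 8
x = rng.normal(size=(H, W, d))

x = attend(x)                                 # row attention: W x W per row
x = attend(x.swapaxes(0, 1)).swapaxes(0, 1)   # column attention: H x H per col
print(x.shape)                     # (4, 6, 8); cost O(H*W*(H+W)), not O((H*W)^2)
```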