AI Glossary
The complete dictionary of Artificial Intelligence
LSTM (Long Short-Term Memory)
Advanced RNN architecture with forget, input and output gates to control long-term information flow.
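The gating described above can be sketched in a few lines. This is a minimal NumPy illustration of a single LSTM step, not a production implementation; the dimensions, weight initialization, and the packed weight matrix `W` are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step. W maps [x; h_prev] to the four gate pre-activations."""
    z = W @ np.concatenate([x, h_prev]) + b
    n = h_prev.shape[0]
    f = sigmoid(z[0*n:1*n])   # forget gate: how much of c_prev to keep
    i = sigmoid(z[1*n:2*n])   # input gate: how much new content to write
    g = np.tanh(z[2*n:3*n])   # candidate cell update
    o = sigmoid(z[3*n:4*n])   # output gate: how much of the cell to expose
    c = f * c_prev + i * g    # cell state carries long-term information
    h = o * np.tanh(c)        # hidden state is the gated, squashed cell
    return h, c

# Toy usage with random weights (illustrative sizes only)
rng = np.random.default_rng(0)
d_in, d_h = 3, 4
W = rng.standard_normal((4 * d_h, d_in + d_h)) * 0.1
b = np.zeros(4 * d_h)
h, c = np.zeros(d_h), np.zeros(d_h)
for t in range(5):
    h, c = lstm_step(rng.standard_normal(d_in), h, c, W, b)
```

Because `h = o * tanh(c)` with `o` in (0, 1), the hidden state stays bounded even when the cell state `c` grows.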
GRU (Gated Recurrent Unit)
A simplified variant of LSTM that combines the forget and input gates into a single update gate.
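The merge of the forget and input gates is the key simplification: one update gate `z` interpolates between the old state and a candidate. A minimal NumPy sketch with illustrative, randomly initialized weights:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, Wz, Wr, Wh, bz, br, bh):
    """One GRU step. The single update gate z plays the role of both the
    LSTM forget and input gates."""
    xh = np.concatenate([x, h_prev])
    z = sigmoid(Wz @ xh + bz)   # update gate (forget + input combined)
    r = sigmoid(Wr @ xh + br)   # reset gate: how much past to use
    h_tilde = np.tanh(Wh @ np.concatenate([x, r * h_prev]) + bh)  # candidate
    return (1.0 - z) * h_prev + z * h_tilde  # convex blend of old and new

# Toy usage (sizes are illustrative)
rng = np.random.default_rng(1)
d_in, d_h = 3, 4
Wz, Wr, Wh = (rng.standard_normal((d_h, d_in + d_h)) * 0.1 for _ in range(3))
bz = br = bh = np.zeros(d_h)
h = np.zeros(d_h)
for t in range(5):
    h = gru_step(rng.standard_normal(d_in), h, Wz, Wr, Wh, bz, br, bh)
```

Since the new state is a convex combination of `h_prev` and a `tanh` candidate, it remains bounded in (-1, 1).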
Bidirectional RNN
RNN architecture that processes sequences in both temporal directions to capture the complete context.
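"Both temporal directions" concretely means running two independent RNNs, one left-to-right and one right-to-left, and concatenating their states at each position. A minimal sketch with a plain tanh RNN and illustrative random weights:

```python
import numpy as np

def rnn_pass(xs, W, U, b):
    """Plain tanh RNN; returns the hidden state at every timestep."""
    h = np.zeros(U.shape[0])
    hs = []
    for x in xs:
        h = np.tanh(W @ x + U @ h + b)
        hs.append(h)
    return np.array(hs)

def birnn(xs, fwd_params, bwd_params):
    """Run one RNN forward and one backward, then concatenate per step,
    so every position sees both past and future context."""
    fwd = rnn_pass(xs, *fwd_params)
    bwd = rnn_pass(xs[::-1], *bwd_params)[::-1]  # re-align backward states
    return np.concatenate([fwd, bwd], axis=1)

# Toy usage: 6 timesteps of 3-dim input, 4 hidden units per direction
rng = np.random.default_rng(2)
T, d_in, d_h = 6, 3, 4
xs = rng.standard_normal((T, d_in))
make = lambda: (rng.standard_normal((d_h, d_in)) * 0.1,
                rng.standard_normal((d_h, d_h)) * 0.1,
                np.zeros(d_h))
H = birnn(xs, make(), make())
print(H.shape)  # (6, 8): hidden size doubles after concatenation
```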
Deep RNN
Recurrent neural networks stacked in multiple layers to learn complex hierarchical representations.
Attention Mechanism
Technique that allows the model to selectively weight relevant parts of the input sequence.
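The "selective weighting" is a softmax over similarity scores between a query and each input position. A minimal NumPy sketch of (scaled) dot-product attention with illustrative random vectors:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

def attention(query, keys, values):
    """Score every position against the query, normalize the scores into
    weights, and return the weighted sum of the values."""
    scores = keys @ query / np.sqrt(len(query))  # one score per position
    weights = softmax(scores)                    # non-negative, sums to 1
    return weights @ values, weights

# Toy usage: 5 positions, 4-dim vectors
rng = np.random.default_rng(3)
T, d = 5, 4
keys = values = rng.standard_normal((T, d))
context, weights = attention(rng.standard_normal(d), keys, values)
```

The output `context` is a single vector: a soft, differentiable selection over the sequence rather than a hard lookup.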
Seq2Seq (Sequence-to-Sequence)
Encoder-decoder architecture that transforms a variable-length sequence into another sequence.
Echo State Networks
RNN with a fixed, randomly initialized recurrent reservoir in which only the output (readout) layer is trained.
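The recipe is: drive a fixed random reservoir with the input, collect its states, and fit only a linear readout (e.g. by ridge regression). A minimal sketch on a toy next-step sine prediction task; the reservoir size, spectral radius, and regularization constant are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 50, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
# Scale recurrent weights to spectral radius < 1 (echo state property)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(us):
    """Fixed dynamics: W_in and W are random and never trained."""
    h = np.zeros(n_res)
    states = []
    for u in us:
        h = np.tanh(W_in @ u + W @ h)
        states.append(h.copy())
    return np.array(states)

# Toy task: predict sin(t) one step ahead from the reservoir state
ts = np.linspace(0, 8 * np.pi, 400)
u = np.sin(ts)[:, None]
X = run_reservoir(u[:-1])
y = u[1:, 0]
# Train ONLY the linear readout, by ridge regression
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
mse = np.mean((X @ W_out - y) ** 2)
```

Training reduces to one linear solve, which is why echo state networks are fast to fit compared with backpropagation through time.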
Neural Turing Machines
Extension of RNNs incorporating addressable external memory for complex algorithmic tasks.
Transformers
Architecture based entirely on attention mechanisms, which has largely replaced RNNs for sequence processing.
Temporal Convolutional Networks
Alternative to RNNs using dilated causal convolutions to model temporal sequences.
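A dilated causal convolution at time t reads only x[t], x[t-d], x[t-2d], ..., so no future information leaks in, and stacking layers with dilations 1, 2, 4, ... grows the receptive field exponentially. A minimal single-filter sketch (kernel and dilation values are illustrative):

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """1-D causal convolution: output at time t depends only on inputs at
    t, t - dilation, t - 2*dilation, ... (left-padded with zeros)."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # zero-pad the past
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

# With kernel [1, 1] and dilation 2, each output is x[t] + x[t-2]
x = np.arange(5.0)
y = causal_dilated_conv(x, np.array([1.0, 1.0]), dilation=2)
print(y)  # [0. 1. 2. 4. 6.]
```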
Hierarchical RNN
Multi-level RNN structure that processes hierarchical sequences, such as sentences within paragraphs.
RNN with Dropout
RNN-specific regularization techniques, such as variational dropout and zoneout, used to prevent overfitting.
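The two named techniques differ in what they randomize. Variational dropout samples one mask per sequence and reuses it at every timestep; zoneout lets each unit keep its previous value instead of zeroing it. A minimal sketch (shapes and rates are illustrative):

```python
import numpy as np

def variational_mask(shape, rate, rng):
    """Variational dropout: sample ONE mask per sequence and reuse it at
    every timestep (standard dropout resamples each step, which degrades
    information over long sequences). Inverted-dropout scaling keeps the
    expected activation unchanged."""
    return (rng.random(shape) >= rate) / (1.0 - rate)

def zoneout(h_prev, h_new, rate, rng):
    """Zoneout: each unit keeps its PREVIOUS value with probability `rate`
    rather than being zeroed, which preserves gradient flow through time."""
    keep_old = (rng.random(h_prev.shape) < rate).astype(float)
    return keep_old * h_prev + (1.0 - keep_old) * h_new

# Toy usage on an 8-unit hidden state
rng = np.random.default_rng(4)
mask = variational_mask((8,), rate=0.5, rng=rng)   # reuse at every step
h = zoneout(np.ones(8), np.zeros(8), rate=0.3, rng=rng)
```

At training time the same `mask` would multiply the hidden state at every timestep of the sequence; at test time both techniques are disabled.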