AI Glossary
The complete dictionary of Artificial Intelligence
Transformer Architecture
Neural network architecture based on attention mechanisms that enables parallel processing of text sequences
Tokenization
Process of segmenting text into discrete units (tokens) for processing by models
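A minimal sketch of the idea: the toy tokenizer below splits on words and punctuation with a regular expression. Real models use learned subword schemes such as BPE or WordPiece; this is only an illustration of text-to-token segmentation.

```python
import re

def tokenize(text):
    # Toy tokenizer: word characters form one token, each punctuation
    # mark is its own token. Not how production tokenizers work.
    return re.findall(r"\w+|[^\w\s]", text)

tokens = tokenize("Hello, world!")
# -> ['Hello', ',', 'world', '!']
```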
Fine-tuning
Specialized adaptation of a pre-trained model on specific data for targeted tasks
Prompt Engineering
Design optimization of instructions to effectively guide language model responses
Attention Mechanism
System allowing the model to weight the importance of different parts of the text during processing
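The standard formulation is scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. A NumPy sketch with illustrative shapes (4 positions, dimension 8):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Scores measure query-key similarity; softmax turns them into
    # weights that sum to 1 over the keys; the output is a weighted
    # average of the value vectors.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))  # 4 query positions, dim 8
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(Q, K, V)
```

Because every position attends to every other in one matrix product, the whole sequence is processed in parallel, which is what the Transformer entry above refers to.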
Language Embeddings
Dense vector representations that capture the semantic meaning of words and phrases
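Semantic closeness between embeddings is typically measured with cosine similarity. The 4-dimensional vectors below are made up for illustration (real embeddings have hundreds or thousands of dimensions):

```python
import numpy as np

def cosine_similarity(a, b):
    # 1.0 = same direction (semantically close), 0 = orthogonal.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings; values chosen by hand for the example.
king  = np.array([0.9, 0.1, 0.8, 0.2])
queen = np.array([0.85, 0.15, 0.75, 0.3])
apple = np.array([0.1, 0.9, 0.05, 0.7])

# Related words end up with a higher similarity than unrelated ones.
sim_royal = cosine_similarity(king, queen)
sim_fruit = cosine_similarity(king, apple)
```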
Transfer Learning
Reusing knowledge acquired from a large corpus for specific tasks with limited data
Scaling Laws
Empirical relationships describing how model performance improves as model size, dataset size, and compute increase
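These laws typically take a power-law form such as L(N) = E + A / N^alpha, where loss falls smoothly as parameter count N grows. A sketch with illustrative placeholder constants (not fitted values):

```python
def scaling_loss(n_params, a=400.0, alpha=0.34, floor=1.69):
    # Chinchilla-style form L(N) = E + A / N^alpha.
    # a, alpha, floor are illustrative, not measured constants.
    return floor + a / n_params ** alpha

# Bigger models predict lower loss, approaching an irreducible floor.
loss_small = scaling_loss(1e6)
loss_large = scaling_loss(1e9)
```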
Zero-shot Learning
Ability of a model to perform tasks it never saw during training, given no examples in the prompt
Multimodal Models
Models integrating text, images, audio, and other modalities within a unified framework
Quantization
Reducing the numerical precision of model weights to optimize inference and storage
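A minimal sketch of one common scheme, symmetric per-tensor int8 quantization: floats are rescaled into the int8 range and a single scale factor is kept to approximately recover them at inference time.

```python
import numpy as np

def quantize_int8(weights):
    # Map floats into [-127, 127] with one shared scale factor.
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Approximate reconstruction; the rounding error is at most scale/2.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```

Storage drops from 4 bytes to 1 byte per weight, at the cost of a small, bounded reconstruction error.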
RAG (Retrieval-Augmented Generation)
Combination of external information retrieval with generation to improve answer accuracy
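The pipeline is: retrieve relevant documents, then prepend them to the prompt so the model answers from that context. A toy retriever ranking documents by word overlap (production systems use dense embeddings and vector indexes):

```python
def retrieve(query, documents, k=1):
    # Toy retriever: score each document by the number of words it
    # shares with the query; return the top k.
    q_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

docs = [
    "The Eiffel Tower is in Paris.",
    "Photosynthesis converts light into chemical energy.",
]
question = "Where is the Eiffel Tower?"
context = retrieve(question, docs)[0]
# The augmented prompt grounds the generation step in retrieved text.
prompt = f"Context: {context}\nQuestion: {question}"
```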
Alignment and Safety
Techniques to ensure that models respect human values and avoid harmful behaviors
Autoregressive Models
Generative architecture predicting the next token based on all previous tokens
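The generation loop can be sketched with a hand-written bigram table standing in for the model: at each step the most likely next token is appended and fed back in. The table and tokens here are invented for illustration.

```python
# Toy "model": next-token probabilities conditioned on the last token.
# A real LLM conditions on the entire preceding sequence.
bigram = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.9, "</s>": 0.1},
    "sat": {"</s>": 1.0},
}

def generate(start="<s>", max_len=5):
    tokens = [start]
    # Greedy decoding: always pick the highest-probability next token.
    while tokens[-1] in bigram and len(tokens) < max_len:
        probs = bigram[tokens[-1]]
        tokens.append(max(probs, key=probs.get))
    return tokens
```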
LoRA (Low-Rank Adaptation)
Parameter-efficient fine-tuning method that freezes the pretrained weights and trains small low-rank update matrices
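In LoRA the effective weight is W + (alpha/r) * B A, where only A and B are trained. A NumPy sketch with illustrative dimensions; B starts at zero so the adapted model initially matches the base model exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4  # model dim and low rank; r << d

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable, small random init
B = np.zeros((d, r))                     # trainable, zero init

def lora_forward(x, alpha=8.0):
    # Effective weight: W + (alpha / r) * B @ A.
    return x @ (W + (alpha / r) * B @ A).T

x = rng.standard_normal((1, d))
y = lora_forward(x)  # equals the base model's output while B == 0
```

The memory saving comes from training 2*d*r parameters (512 here) instead of d*d (4096), while W stays untouched.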