AI Glossary
The complete dictionary of artificial intelligence
Transformer Architecture
A neural network design built on attention mechanisms that forms the foundation of modern LLMs.
Attention Mechanisms
Algorithmic components that let models selectively weight different parts of the text while processing it.
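The weighting described above can be made concrete with scaled dot-product attention, the core operation inside Transformers. A minimal sketch in NumPy, with toy shapes chosen purely for illustration:

```python
# Scaled dot-product attention: each token's output is a weighted mix of all
# value vectors, with weights derived from query/key similarity.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: arrays of shape (seq_len, d_k)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # pairwise token similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over each row
    return weights @ V                                   # mix of value vectors per token

# Toy usage: 4 tokens with 8-dimensional queries, keys, and values.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)              # shape (4, 8)
```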
Fine-tuning and Adaptation
The process of adjusting pre-trained models on specific data for particular tasks or domains.
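A minimal, self-contained PyTorch sketch of the fine-tuning pattern: the "pre-trained" backbone and the task data below are toy stand-ins, but the idea (freeze or gently update the backbone, train a new task-specific head on domain data) is what fine-tuning refers to.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # stands in for a pre-trained model
head = nn.Linear(64, 3)                                  # new head for a 3-class task

for p in backbone.parameters():
    p.requires_grad = False                              # keep pre-trained weights fixed

optimizer = torch.optim.AdamW(head.parameters(), lr=2e-5)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(16, 32)                                  # toy "domain-specific" batch
y = torch.randint(0, 3, (16,))
for _ in range(5):                                       # a few adaptation steps
    loss = loss_fn(head(backbone(x)), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```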
Prompt Engineering
Techniques for optimizing instructions to effectively guide LLMs toward the desired responses.
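An illustrative prompt showing common prompt-engineering ingredients: an explicit role, a few worked examples (few-shot prompting), and an output-format constraint. The task and examples are invented for illustration.

```python
prompt = """You are a customer-support assistant. Classify each message as
POSITIVE, NEGATIVE, or NEUTRAL. Answer with the label only.

Message: "The new update is fantastic!"
Label: POSITIVE

Message: "My order never arrived."
Label: NEGATIVE

Message: "What are your opening hours?"
Label:"""
```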
Multimodal Models
LLMs capable of simultaneously processing and generating text, images, audio, and other data formats.
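A conceptual PyTorch sketch of one common multimodal design: separate encoders project images and text into a shared embedding space where they can be compared. The encoders here are toy stand-ins, not a real vision or language model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

image_encoder = nn.Linear(2048, 512)   # stands in for a vision backbone's features
text_encoder = nn.Linear(768, 512)     # stands in for a language model's features

image_feats = torch.randn(4, 2048)     # toy features for 4 images
text_feats = torch.randn(4, 768)       # toy features for 4 captions

img_emb = F.normalize(image_encoder(image_feats), dim=-1)
txt_emb = F.normalize(text_encoder(text_feats), dim=-1)
similarity = img_emb @ txt_emb.T        # 4x4 image-caption similarity matrix
```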
Quantization and Optimization
Techniques for reducing model size and improving computational efficiency for deployment.
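A minimal sketch of one such technique, symmetric int8 weight quantization: weights are stored as 8-bit integers plus a single float scale, then approximately reconstructed at use time. The values are illustrative.

```python
import numpy as np

weights = np.random.randn(4, 4).astype(np.float32)    # toy float32 weights

scale = np.abs(weights).max() / 127.0                  # map the largest weight to +/-127
q_weights = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

dequantized = q_weights.astype(np.float32) * scale     # approximate reconstruction
error = np.abs(weights - dequantized).max()            # small error, ~4x less memory
```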
Evaluation and Metrics
Methodologies and indicators for measuring the performance, coherence, and safety of LLMs.
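A minimal sketch of one standard language-modeling metric, perplexity, computed from the probabilities a model assigns to each actual next token in a held-out text; the probabilities below are made up.

```python
import math

# Probability the model assigned to each actual next token in a held-out sequence.
token_probs = [0.42, 0.10, 0.73, 0.05, 0.31]

avg_neg_log_likelihood = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(avg_neg_log_likelihood)   # lower is better; 1.0 would be a perfect model
```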
Scaling Laws
Mathematical principles describing the relationship between model size, training data, and performance.
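A sketch of a Chinchilla-style scaling law: expected loss as a function of parameter count N and training tokens D. The coefficients are roughly those fitted by Hoffmann et al. (2022) and are used here only as an illustration of the functional form.

```python
def expected_loss(N, D, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Predicted loss for N parameters trained on D tokens (illustrative constants)."""
    return E + A / N**alpha + B / D**beta

# Comparing scaling choices: growing the model alone vs. growing model and data together.
small = expected_loss(N=1e9, D=2e10)
bigger_model_only = expected_loss(N=2e9, D=2e10)
bigger_model_and_data = expected_loss(N=2e9, D=4e10)
```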
RAG - Retrieval-Augmented Generation
Approach that combines external information retrieval with text generation to improve the accuracy and freshness of answers.
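A minimal sketch of the RAG pattern: retrieve the documents most relevant to a query, then prepend them to the prompt sent to the model. Retrieval here is a toy word-overlap score (real systems use dense vector embeddings), and call_llm is a hypothetical placeholder for whatever model API is in use.

```python
documents = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Shipping to Europe typically takes 5 to 7 business days.",
    "Gift cards never expire and can be used on any product.",
]

def retrieve(query, docs, k=2):
    def overlap(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(docs, key=overlap, reverse=True)[:k]

query = "How long do refunds take after purchase?"
context = "\n".join(retrieve(query, documents))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# answer = call_llm(prompt)   # hypothetical model call
```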
Agent Chaining
Systems where multiple LLMs collaborate in sequence or in parallel to solve complex tasks.
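A minimal sketch of sequential chaining: one model call produces an outline, a second expands it, and a third reviews it, with each output passed along explicitly. call_llm is a stub standing in for any real LLM API so the sketch runs on its own.

```python
def call_llm(prompt):
    return f"[model output for: {prompt[:40]}...]"   # stub in place of a real model call

task = "Write a short product announcement for a new e-bike."

outline = call_llm(f"Produce a three-point outline for this task:\n{task}")
draft = call_llm(f"Expand this outline into a full text:\n{outline}")
review = call_llm(f"Check this draft for factual or tonal issues:\n{draft}")
```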
Guardrails and Containment Mechanisms
Techniques for controlling and aligning LLM behavior, preventing inappropriate or harmful responses.
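A minimal sketch of where an output guardrail sits: a generated answer is screened against rules before it reaches the user. Real systems use trained classifiers and policy models rather than keyword lists; the terms below are invented for illustration.

```python
BLOCKED_TERMS = {"credit card number", "social security number"}

def guarded_reply(generated_text):
    lowered = generated_text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "I can't share that information."   # block or rewrite instead of answering
    return generated_text
```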
Domain-Specific Models
LLMs specialized and optimized for particular sectors such as medicine, law, or finance.
Continual Learning
The ability to incorporate new knowledge over time without forgetting what was previously learned.
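A minimal sketch of one continual-learning strategy, rehearsal: stored examples from earlier tasks are mixed into every batch of new-task data so the model keeps seeing old knowledge while learning new material. The data and batching below are toy placeholders.

```python
import random

replay_buffer = [("old example 1", "label_a"), ("old example 2", "label_b")]
new_task_data = [("new example 1", "label_c"), ("new example 2", "label_c")]

def make_batch(new_items, buffer, replay_ratio=0.5):
    n_replay = int(len(new_items) * replay_ratio)
    return new_items + random.sample(buffer, min(n_replay, len(buffer)))

batch = make_batch(new_task_data, replay_buffer)   # train on this mixed batch
```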
Low-Resource Models
Efficient training techniques for building capable LLMs with limited data and computational resources.
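A sketch of one widely used parameter-efficient technique, LoRA (low-rank adaptation): the large pre-trained weight matrix stays frozen and only two small low-rank matrices are trained, cutting the number of trainable parameters dramatically. The layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_dim, out_dim, rank=4):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)        # stands in for a pre-trained layer
        for p in self.base.parameters():
            p.requires_grad = False                   # frozen
        self.A = nn.Parameter(torch.randn(rank, in_dim) * 0.01)  # trainable, small
        self.B = nn.Parameter(torch.zeros(out_dim, rank))        # trainable, starts at zero

    def forward(self, x):
        return self.base(x) + x @ self.A.T @ self.B.T  # frozen path + low-rank update

layer = LoRALinear(768, 768)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)  # far fewer than 768*768
```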
Tokenization and Encoding
The process of converting text into numerical units that the neural architecture of an LLM can process.
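A minimal sketch of that conversion: text is split into units and each unit is mapped to an integer ID from a vocabulary. Real LLM tokenizers use learned subword schemes such as byte-pair encoding rather than whitespace splitting; the vocabulary here is a toy example.

```python
vocab = {"<unk>": 0, "the": 1, "model": 2, "reads": 3, "tokens": 4, ".": 5}

def tokenize(text):
    units = text.lower().replace(".", " .").split()
    return [vocab.get(u, vocab["<unk>"]) for u in units]   # unknown words map to <unk>

ids = tokenize("The model reads tokens.")   # -> [1, 2, 3, 4, 5]
```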