AI Glossary
The Complete Dictionary of Artificial Intelligence
Progressive Networks
Neural network architecture that adds a new column of layers for each new task while freezing the columns trained on previous tasks, so their knowledge is preserved. This approach enables continual learning without catastrophic forgetting (see the sketch following the Lateral Connections entry).
Lateral Connections
Connections that link columns from previous tasks to new columns in the Progressive Networks architecture. They enable knowledge transfer and reuse of learned features.
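A minimal PyTorch sketch of both ideas (the class name and layer sizes are illustrative, not the original implementation): each task owns a column, the first column is frozen after task 1, and the second column reads the first column's hidden activations through a lateral connection.

```python
import torch
import torch.nn as nn

class TwoColumnProgressiveNet(nn.Module):
    """Minimal two-task Progressive Network: column 1 is frozen,
    column 2 reads column 1's hidden layer via a lateral connection."""

    def __init__(self, in_dim=784, hidden=256, out_dim=10):
        super().__init__()
        # Column 1: trained on task 1, then frozen.
        self.col1_hidden = nn.Linear(in_dim, hidden)
        self.col1_out = nn.Linear(hidden, out_dim)
        for p in [*self.col1_hidden.parameters(), *self.col1_out.parameters()]:
            p.requires_grad = False  # preserve task-1 knowledge
        # Column 2: trained on task 2.
        self.col2_hidden = nn.Linear(in_dim, hidden)
        self.lateral = nn.Linear(hidden, hidden)  # lateral connection from column 1
        self.col2_out = nn.Linear(hidden, out_dim)

    def forward(self, x):
        h1 = torch.relu(self.col1_hidden(x))                      # frozen task-1 features
        h2 = torch.relu(self.col2_hidden(x) + self.lateral(h1))   # reuse them for task 2
        return self.col2_out(h2)
```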
Column-wise Architecture
Organizational structure where each task has its own distinct but interconnected neural column. This architecture facilitates model expansion for new tasks without disrupting existing knowledge.
Forward Transfer
Phenomenon where knowledge acquired from previous tasks improves performance on new tasks. In Progressive Networks, this positive transfer is enabled by the lateral connections; a way to measure it is sketched under Backward Transfer.
Backward Transfer
Ability of a model to retroactively improve performance on previous tasks after learning new tasks. Because Progressive Networks freeze earlier columns, they guarantee that backward transfer is never negative (no forgetting), but they cannot improve earlier tasks either.
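Both directions of transfer are commonly quantified from an accuracy matrix R, where R[i, j] is the test accuracy on task j after training on task i. The definitions below follow the GEM paper (Lopez-Paz & Ranzato, 2017); the variable names are illustrative.

```python
import numpy as np

def backward_forward_transfer(R, baseline):
    """R[i, j]: test accuracy on task j after training task i (T x T).
    baseline[j]: accuracy of an untrained model on task j."""
    T = R.shape[0]
    # BWT: how much training on later tasks changed earlier-task accuracy.
    bwt = np.mean([R[T - 1, j] - R[j, j] for j in range(T - 1)])
    # FWT: how much earlier training helps a task before it is trained on.
    fwt = np.mean([R[j - 1, j] - baseline[j] for j in range(1, T)])
    return bwt, fwt
```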
Knowledge Preservation
Mechanism that ensures knowledge acquired from previous tasks is not degraded when learning new tasks. This preservation is fundamental in continual learning.
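The bluntest preservation mechanism, and the one Progressive Networks rely on, is freezing: parameters belonging to earlier tasks are simply excluded from gradient updates. A self-contained sketch (module names are placeholders):

```python
import torch
import torch.nn as nn

model = nn.ModuleDict({
    "old_column": nn.Linear(16, 16),  # trained on a previous task
    "new_column": nn.Linear(16, 16),  # to be trained on the new task
})
# Freeze the old column: its weights can no longer be degraded.
for p in model["old_column"].parameters():
    p.requires_grad = False
# The optimizer only sees the remaining trainable (new-task) parameters.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
```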
Task-specific Adapters
Specialized modules inserted into the architecture to adapt general features to the specific requirements of each task. These adapters add per-task flexibility with few extra parameters while the shared knowledge stays untouched.
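One common design, sketched here with illustrative dimensions, is a small residual bottleneck module initialized near the identity, so that inserting it does not disturb the frozen features it adapts:

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Small task-specific module: down-project, nonlinearity, up-project,
    with a residual connection so it starts out near the identity."""

    def __init__(self, dim=256, bottleneck=32):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)  # near-identity initialization
        nn.init.zeros_(self.up.bias)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))
```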
Capacity Expansion
Strategy for dynamically increasing model capacity by adding new neural columns to accommodate new tasks. This controlled expansion avoids saturating existing capacity, at the cost of a model that grows with the number of tasks.
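A sketch of the expansion step, using a hypothetical helper add_task that freezes the heads of earlier tasks and appends new capacity for the incoming one:

```python
import torch.nn as nn

class ExpandableNet(nn.Module):
    """Shared input trunk that grows one output head per task;
    heads of earlier tasks are frozen when a new one is added."""

    def __init__(self, in_dim=784, hidden=256):
        super().__init__()
        self.hidden = hidden
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList()

    def add_task(self, out_dim=10):
        for head in self.heads:          # freeze earlier tasks' heads
            for p in head.parameters():
                p.requires_grad = False
        self.heads.append(nn.Linear(self.hidden, out_dim))
        return len(self.heads) - 1       # id of the new task

    def forward(self, x, task_id):
        return self.heads[task_id](self.trunk(x))
```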
Neural Plasticity
The ability of neural networks to adapt and modify their connections in response to new information. In Progressive Networks, this plasticity is controlled to preserve existing knowledge.
Synaptic Intelligence
Continual learning method that identifies and protects the synaptic connections important for previous tasks. Importance is estimated online, from each weight's contribution to the decrease in the loss while a task is being learned.
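A compact sketch of the bookkeeping, following Zenke et al. (2017); the dict-of-tensors containers and hyperparameter values are illustrative:

```python
import torch

def si_update_path_integral(w, grads, deltas):
    """After each optimizer step: w[k] += -grad_k * delta_theta_k,
    the per-parameter contribution to the drop in loss."""
    for k in w:
        w[k] += -grads[k] * deltas[k]

def si_omega(w, theta_new, theta_old, xi=0.1):
    """End of task: normalize the path integral into an importance weight."""
    return {k: torch.clamp(w[k], min=0) / ((theta_new[k] - theta_old[k]) ** 2 + xi)
            for k in w}

def si_penalty(params, theta_star, omega, c=0.1):
    """Quadratic surrogate loss that protects important weights."""
    return c * sum((omega[k] * (params[k] - theta_star[k]) ** 2).sum()
                   for k in params)
```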
Elastic Weight Consolidation
Regularization technique that penalizes changes to synaptic weights in proportion to their importance for previous tasks, estimated via the Fisher information. This quadratic penalty trades plasticity (learning the new task) against stability (preserving the old ones).
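A minimal sketch of the EWC penalty, assuming a diagonal Fisher estimate and the post-task parameters theta_star were captured after the previous task (the Fisher diagonal is typically approximated by the mean squared gradient of the log-likelihood on that task's data):

```python
def ewc_penalty(model, fisher, theta_star, lam=1000.0):
    """EWC regularizer: lam/2 * sum_i F_i * (theta_i - theta*_i)^2.
    `fisher` and `theta_star` are dicts keyed by parameter name,
    captured after training the previous task."""
    loss = 0.0
    for name, p in model.named_parameters():
        if name in fisher:
            loss = loss + (fisher[name] * (p - theta_star[name]) ** 2).sum()
    return 0.5 * lam * loss
```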
Memory Aware Synapses
Approach that evaluates the importance of each synapse from the sensitivity of the model's outputs to that parameter, accumulated over (possibly unlabeled) data from past tasks, and uses these importance weights to guide future learning.
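A sketch of the MAS importance estimate, following Aljundi et al. (2018); the model and loader are placeholders. Only inputs are used, which is why MAS also works with unlabeled data:

```python
import torch

def mas_importance(model, data_loader):
    """MAS: importance of each parameter = average absolute gradient of the
    squared L2 norm of the model's output, so no labels are needed."""
    omega = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for x, _ in data_loader:        # labels are ignored
        model.zero_grad()
        out = model(x)
        out.pow(2).sum().backward()  # sensitivity of the output norm
        for n, p in model.named_parameters():
            if p.grad is not None:
                omega[n] += p.grad.abs()
        n_batches += 1
    return {n: w / max(n_batches, 1) for n, w in omega.items()}
```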
Gradient Episodic Memory
Mechanism for storing examples from previous tasks in an episodic memory to counter catastrophic forgetting. When learning a new task, gradient updates are constrained so that the loss on the stored examples does not increase.
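GEM itself solves a small quadratic program with one constraint per past task; the averaged single-constraint variant (A-GEM) is simpler to sketch. Here grad and grad_ref are assumed to be flattened gradient vectors from the current batch and from a batch sampled out of the episodic memory:

```python
import torch

def agem_project(grad, grad_ref):
    """If the new-task gradient would increase loss on the episodic memory
    (negative dot product), project it onto the plane orthogonal to the
    memory gradient; otherwise leave it unchanged."""
    dot = torch.dot(grad, grad_ref)
    if dot < 0:
        grad = grad - (dot / torch.dot(grad_ref, grad_ref)) * grad_ref
    return grad
```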
Dynamic Architecture Expansion
Process of automatically adding new neural resources when the model reaches its current capacity. This dynamic expansion is essential for continual learning systems that face an open-ended stream of tasks.
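An illustrative trigger for when to expand (a heuristic for this sketch, not taken from a specific paper): grow the network once validation loss stops improving, i.e. when the current capacity appears exhausted.

```python
def should_expand(val_losses, patience=3, min_improvement=1e-3):
    """Return True when validation loss has not improved by at least
    `min_improvement` over the last `patience` epochs."""
    recent = val_losses[-(patience + 1):]
    if len(recent) <= patience:
        return False  # not enough history yet
    return recent[0] - min(recent[1:]) < min_improvement
```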
Multi-task Representation
Shared feature space capturing information common to multiple tasks simultaneously. In Progressive Networks, this sharing is realized through lateral connections that let new columns reuse features learned by earlier ones, rather than through a single jointly trained space.
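For contrast with the column-per-task design, a minimal hard-parameter-sharing sketch in which one trunk produces the multi-task representation and each task keeps only a small private head (dimensions illustrative):

```python
import torch.nn as nn

class SharedTrunkNet(nn.Module):
    """Hard parameter sharing: one trunk learns a representation used by
    all tasks; each task has only a small private output head."""

    def __init__(self, in_dim=784, hidden=256, task_dims=(10, 5)):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, d) for d in task_dims)

    def forward(self, x, task_id):
        return self.heads[task_id](self.trunk(x))
```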