AI glossary
The complete dictionary of artificial intelligence
Multi-task CNN
Convolutional neural network architecture designed to simultaneously perform multiple computer vision tasks by sharing learned representations between different outputs.
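A minimal NumPy sketch of the idea (not a real CNN): one shared feature extractor feeds two task-specific heads, here an assumed 10-way classifier and a 4-value box regressor. All shapes and weight names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.standard_normal((1, 64))          # flattened input features (illustrative)
W_shared = rng.standard_normal((64, 32))  # shared backbone weights

h = np.maximum(x @ W_shared, 0.0)         # shared representation (ReLU)

W_cls = rng.standard_normal((32, 10))     # head 1: 10-way classification (assumed)
W_reg = rng.standard_normal((32, 4))      # head 2: 4-value regression, e.g. a box (assumed)

logits = h @ W_cls                        # task 1 output
box = h @ W_reg                           # task 2 output
print(logits.shape, box.shape)            # (1, 10) (1, 4)
```

Both outputs are computed from the same shared representation `h`, which is what distinguishes this from training two separate networks.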
Parameter sharing
Technique where network weights are reused across different tasks to reduce computational complexity and improve generalization through joint learning.
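The reduction in complexity can be made concrete by counting weights. This sketch compares three independent two-layer networks against three heads on one shared hidden layer; all sizes are hypothetical.

```python
# Parameter count with vs. without sharing (hypothetical layer sizes).
d_in, d_hidden, n_tasks, d_out = 64, 32, 3, 10

separate = n_tasks * (d_in * d_hidden + d_hidden * d_out)  # 3 full networks
shared = d_in * d_hidden + n_tasks * (d_hidden * d_out)    # 1 shared hidden layer
print(separate, shared)  # sharing saves (n_tasks - 1) * d_in * d_hidden weights
```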
Multiple output branches
Architectural structure featuring multiple specialized classification or regression heads, each dedicated to a specific task while sharing a common feature extraction backbone.
Multi-task learning
Learning paradigm where a model is trained simultaneously on multiple tasks to benefit from positive transfer effects and improve overall performance.
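A toy illustration of joint training, assuming one shared parameter and two quadratic task losses with different optima. The shared parameter settles on a compromise between the two tasks.

```python
# Joint gradient descent on one shared parameter serving two tasks.
w = 0.0
lr = 0.1
for _ in range(100):
    grad_task1 = 2 * (w - 1.0)   # task 1 pulls w toward 1.0
    grad_task2 = 2 * (w - 3.0)   # task 2 pulls w toward 3.0
    w -= lr * (grad_task1 + grad_task2)
print(round(w, 3))  # converges near 2.0, a compromise between the tasks
```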
Shared architecture
Network design where lower layers extract common features shared across tasks, while upper layers specialize for specific predictions.
Multi-task encoder-decoder
Architecture in which a single shared encoder extracts common features and multiple task-specific decoders generate specialized predictions for different vision tasks.
Feature Pyramid Network (FPN)
Hierarchical architecture generating multi-scale feature maps shared across tasks, enabling capture of both fine details and global semantic context.
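A simplified 1-D sketch of the FPN top-down pathway: coarse features are upsampled and added to finer ones, producing a pyramid of maps that all carry semantic context. The nearest-neighbour upsampling and the constant feature values are illustrative.

```python
import numpy as np

c3 = np.ones(8)    # fine-scale features (illustrative 1-D stand-in for a map)
c4 = np.ones(4)    # mid-scale features
c5 = np.ones(2)    # coarse, most semantic features

def upsample2x(x):
    return np.repeat(x, 2)  # nearest-neighbour 2x upsampling

# Top-down pathway with lateral additions, as in an FPN.
p5 = c5
p4 = c4 + upsample2x(p5)
p3 = c3 + upsample2x(p4)
print(p3.shape, p4.shape, p5.shape)  # (8,) (4,) (2,)
```

Each pyramid level `p3..p5` can then be shared by several task heads operating at different scales.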
Multi-task attention
Task-conditioned attention mechanism that dynamically modifies feature representations to optimize their relevance for each specific output.
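One way to sketch task conditioning: each task applies its own query vector over the same shared features, so the pooled summary differs per task. The feature values and queries below are made up for illustration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

feats = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])            # 3 shared feature vectors (illustrative)
q_detection = np.array([2.0, 0.0])        # assumed query: task A emphasizes dim 0
q_segmentation = np.array([0.0, 2.0])     # assumed query: task B emphasizes dim 1

w_det = softmax(feats @ q_detection)      # task-specific attention weights
w_seg = softmax(feats @ q_segmentation)
pooled_det = w_det @ feats                # task-specific weighted summaries
pooled_seg = w_seg @ feats
print(np.round(pooled_det, 2), np.round(pooled_seg, 2))
```

The same shared features yield different summaries depending on which task's query conditions the attention.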
Multi-task weighted loss
Loss function combining multiple loss terms with adaptive or fixed weights to balance the relative influence of each task during training.
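In the simplest fixed-weight form, the total loss is a weighted sum of per-task losses. The task names, loss values, and weights below are illustrative.

```python
# Fixed-weight combination of per-task losses (all values illustrative).
losses = {"detection": 0.8, "segmentation": 2.4, "depth": 1.2}
weights = {"detection": 1.0, "segmentation": 0.5, "depth": 0.25}

total = sum(weights[t] * losses[t] for t in losses)
print(round(total, 3))  # 0.8 + 1.2 + 0.3 = 2.3
```

Adaptive schemes instead learn or adjust these weights during training rather than fixing them up front.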
Auxiliary tasks
Secondary tasks added to improve the learning of the main task by providing additional regularization signals and more robust representations.
Multi-task knowledge transfer
Phenomenon where joint learning of multiple tasks enables beneficial knowledge transfer between them, improving performance on individual tasks compared to learning each in isolation.
Feature fusion
Process of combining features from different layers or modalities to create enriched representations suitable for the simultaneous execution of multiple tasks.
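Concatenation is one of the simplest fusion schemes: features from different layers are stacked into a single enriched vector. The "low-level" and "high-level" values here are placeholders.

```python
import numpy as np

low_level = np.array([0.2, 0.4, 0.6])    # e.g. edge/texture features (illustrative)
high_level = np.array([0.9, 0.1])        # e.g. semantic features (illustrative)

fused = np.concatenate([low_level, high_level])  # fusion by concatenation
print(fused.shape)  # (5,)
```

Alternatives include element-wise addition or learned projections before mixing; concatenation keeps all information but grows the feature dimension.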
Task decoupling
Architectural approach that explicitly separates task-specific parameters while maintaining a shared backbone to optimize individual performance.
Dynamic routing
Adaptive mechanism that conditionally selects computational paths in the network based on the specific requirements of each task to optimize efficiency.
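A hard-routing sketch: a gate selects one of two hypothetical expert paths per input. Real systems often use soft (weighted) routing and learned gate logits; everything here is illustrative.

```python
import numpy as np

def expert_a(x):
    return x * 2.0          # hypothetical computational path A

def expert_b(x):
    return x + 10.0         # hypothetical computational path B

def route(x, gate_logits):
    path = int(np.argmax(gate_logits))   # hard routing for clarity
    return (expert_a, expert_b)[path](x)

print(route(3.0, np.array([1.0, 0.0])))  # path A -> 6.0
print(route(3.0, np.array([0.0, 1.0])))  # path B -> 13.0
```

Only the selected path runs, which is where the efficiency gain comes from.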
Universal representations
Generic features learned on multiple tasks and transferable to new tasks with minimal adaptation, promoting robust generalization.
Multi-task fine-tuning
Process of progressively adapting a pre-trained model on multiple tasks simultaneously to optimize performance on a target set of applications.
Gradient balancing
Optimization technique that dynamically adjusts each task's contribution to the shared gradient update, preventing any single task from dominating the others.
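A simplified sketch of the idea: rescale each task's gradient to a common norm before summing, so a task with much larger gradients cannot dominate the shared update. (Methods such as GradNorm learn these scales instead; this is not their exact algorithm.)

```python
import numpy as np

g1 = np.array([3.0, 4.0])     # task 1 gradient, norm 5 (illustrative)
g2 = np.array([0.06, 0.08])   # task 2 gradient, norm 0.1 (illustrative)

def unit(g):
    return g / np.linalg.norm(g)  # rescale to unit norm

balanced = unit(g1) + unit(g2)    # both tasks now contribute equally
print(np.round(balanced, 2))      # [1.2 1.6]
```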
Multi-task normalization
Normalization strategies adapted to multi-task architectures, handling distribution differences between tasks to stabilize joint training.
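One such strategy keeps separate normalization statistics per task, so each task's distribution is standardized independently even though the features are shared. The per-task means and standard deviations below are invented for illustration.

```python
import numpy as np

feats = np.array([1.0, 2.0, 3.0, 4.0])   # shared features (illustrative)

# Hypothetical per-task (mean, std) statistics.
stats = {"task_a": (2.0, 1.0), "task_b": (10.0, 4.0)}

normed = {t: (feats - mu) / sigma for t, (mu, sigma) in stats.items()}
print(np.round(normed["task_a"], 2))  # standardized with task_a's statistics
print(np.round(normed["task_b"], 2))  # same features, task_b's statistics
```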
Cross-stitch Networks
Multi-task architecture using cross-stitch units to linearly combine activations from specialized networks, enabling flexible collaborative learning.
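A cross-stitch unit mixes the activations of the two task networks with a learnable 2x2 matrix, [xa'; xb'] = A [xa; xb]. The activations and mixing coefficients below are illustrative (the coefficients would normally be learned).

```python
import numpy as np

xa = np.array([1.0, 2.0])   # activation from task-A network (illustrative)
xb = np.array([3.0, 4.0])   # activation from task-B network (illustrative)

A = np.array([[0.9, 0.1],
              [0.2, 0.8]])  # cross-stitch mixing matrix (illustrative values)

stacked = np.stack([xa, xb])   # shape (2, d)
mixed = A @ stacked            # per-unit linear combination of both tasks
xa_new, xb_new = mixed
print(np.round(xa_new, 2), np.round(xb_new, 2))  # [1.2 2.2] [2.6 3.6]
```

A near-identity matrix keeps the networks mostly separate, while off-diagonal mass increases sharing, which is what makes the collaboration flexible.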