AI Glossary
The complete glossary of AI
Task Parallelization
Technique in which multiple independent tasks are executed simultaneously on different processors to speed up the solution of complex problems.
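A minimal sketch of the idea, assuming a toy `evaluate` function as the independent task (threads stand in for separate processors here; a `ProcessPoolExecutor` would be the usual choice for CPU-bound work):

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(task):
    # Stand-in for an expensive, independent computation
    return sum(i * i for i in range(task))

tasks = [1000, 2000, 3000, 4000]

# Run the independent tasks concurrently; because they share no state,
# they can be dispatched to workers in any order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(evaluate, tasks))
```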
Distributed Computing
Architecture where multiple computers collaborate via a network to collectively solve an optimization problem by sharing the workload.
Parallel Metaheuristics
Nature-inspired optimization algorithms whose operations are executed simultaneously on multiple computing units to more effectively explore the search space.
Parallel Simulated Annealing
Parallel approach to simulated annealing where multiple Markov chains evolve simultaneously with different temperature parameters to accelerate convergence.
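As an illustrative sketch (the objective f(x) = x², the cooling schedule, and all parameters are invented for the example), each chain runs with its own temperature and the best final state is kept:

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def anneal_chain(temperature, steps, seed):
    """One Markov chain of simulated annealing minimising f(x) = x**2."""
    rng = random.Random(seed)
    x = rng.uniform(-10, 10)
    t = temperature
    for _ in range(steps):
        candidate = x + rng.gauss(0, 1)
        delta = candidate ** 2 - x ** 2
        # Metropolis acceptance: always take improvements, sometimes accept
        # worse moves with probability exp(-delta / t)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = candidate
        t *= 0.995  # geometric cooling
    return x

# Several chains, each with a different starting temperature, run in parallel
temperatures = [0.5, 1.0, 5.0, 10.0]
with ThreadPoolExecutor() as pool:
    finals = list(pool.map(anneal_chain, temperatures, [2000] * 4, range(4)))

best = min(finals, key=lambda x: x ** 2)
```

Running chains at several temperatures hedges against a single poorly chosen schedule: hot chains explore widely while cold chains refine locally.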
Parallel Ant Colony Optimization
Parallel extension of ACO where multiple ant colonies independently explore the solution space and periodically share their best pheromone trails.
Parallel Tabu Search
Parallelized tabu search technique where multiple neighborhoods are explored simultaneously or where the tabu list is distributed among processors.
Domain Partitioning
Decomposition strategy involving dividing the problem domain into subdomains assigned to different processors working in parallel.
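A minimal sketch, assuming a made-up one-dimensional objective: the domain [0, 10] is split into subintervals, each searched independently, and the local winners are compared at the end:

```python
from concurrent.futures import ThreadPoolExecutor

def objective(x):
    # Illustrative objective with its minimum at x = 3.7
    return (x - 3.7) ** 2

def search_subdomain(bounds, resolution=1000):
    """Grid search restricted to one subdomain."""
    lo, hi = bounds
    step = (hi - lo) / resolution
    points = [lo + i * step for i in range(resolution + 1)]
    return min(points, key=objective)

# Partition the full domain [0, 10] into four subdomains, one per worker
subdomains = [(0.0, 2.5), (2.5, 5.0), (5.0, 7.5), (7.5, 10.0)]
with ThreadPoolExecutor() as pool:
    local_best = list(pool.map(search_subdomain, subdomains))

best = min(local_best, key=objective)
```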
Dynamic Load Balancing
Adaptive mechanism redistributing tasks among processors during execution to optimize resource utilization and minimize waiting times.
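One common realisation is a shared work queue: instead of assigning tasks up front, each worker pulls the next task the moment it becomes free, so faster workers automatically absorb more of the load. A small sketch with simulated uneven task costs:

```python
import queue
import threading
import time

work = queue.Queue()
for size in [5, 1, 8, 2, 9, 3, 7, 4]:  # tasks of uneven cost
    work.put(size)

results = []
lock = threading.Lock()

def worker():
    # Pull-based scheduling: no worker sits idle while tasks remain
    while True:
        try:
            size = work.get_nowait()
        except queue.Empty:
            return
        time.sleep(size / 1000)  # simulate an uneven workload
        with lock:
            results.append(size * size)

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```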
Inter-process communication
Exchange of information and synchronization between different processes executed in parallel, essential for coordinating distributed optimization algorithms.
Master-slave architecture
Parallelization model where a master process distributes tasks and collects results, while slave processes execute calculations locally.
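A minimal sketch of the pattern with threads as workers: the master fills a task queue and collects from a result queue, while each worker computes locally until it receives a shutdown signal (the `None` poison pill is a common convention, used here for illustration):

```python
import queue
import threading

tasks = queue.Queue()
results = queue.Queue()

def slave():
    # Worker loop: fetch a task, compute locally, report back to the master
    while True:
        n = tasks.get()
        if n is None:  # poison pill: master signals shutdown
            break
        results.put((n, n * n))

# Master: start workers, distribute tasks, then collect the results
workers = [threading.Thread(target=slave) for _ in range(3)]
for w in workers:
    w.start()
for n in range(10):
    tasks.put(n)
for _ in workers:
    tasks.put(None)
for w in workers:
    w.join()

collected = dict(results.get() for _ in range(10))
```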
Island model
Parallelization approach where multiple subpopulations evolve in isolation on different islands with periodic migrations of individuals between islands.
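A toy sketch of the model (the (mu + lambda) evolution step, the objective f(x) = x², and the ring migration topology are all illustrative choices): islands evolve in isolation, then each sends its best individual to its neighbour, replacing the neighbour's worst:

```python
import random

rng = random.Random(42)

def evolve(island, generations=20):
    """Simple (mu + lambda) evolution minimising f(x) = x**2 on one island."""
    for _ in range(generations):
        offspring = [x + rng.gauss(0, 0.5) for x in island]
        island = sorted(island + offspring, key=lambda x: x * x)[:len(island)]
    return island

# Four islands, each holding its own isolated subpopulation
islands = [[rng.uniform(-10, 10) for _ in range(10)] for _ in range(4)]

for epoch in range(5):
    # Isolated evolution; in a real setup each island runs on its own processor
    islands = [evolve(island) for island in islands]
    # Migration: each island's best individual moves to the next island (ring),
    # replacing that island's worst individual
    best = [min(island, key=lambda x: x * x) for island in islands]
    for k in range(len(islands)):
        worst = max(islands[k], key=lambda x: x * x)
        islands[k].remove(worst)
        islands[k].append(best[k - 1])

champion = min((x for island in islands for x in island), key=lambda x: x * x)
```

The occasional migrations spread good genetic material without collapsing the islands into a single population, preserving diversity.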
Data parallelism
Strategy that executes the same operation simultaneously on different portions of a large dataset, ideal for evaluating many candidate solutions at once.
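A minimal sketch: the dataset is split into equal chunks and the same partial-sum operation runs on each chunk concurrently, with the partial results combined at the end:

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(1_000_000))

def partial_sum(chunk):
    # The same operation, applied to one portion of the dataset
    return sum(x * x for x in chunk)

# Split the data into equal chunks, one per worker
n_chunks = 4
size = len(data) // n_chunks
chunks = [data[i * size:(i + 1) * size] for i in range(n_chunks)]

with ThreadPoolExecutor(max_workers=n_chunks) as pool:
    total = sum(pool.map(partial_sum, chunks))
```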
Functional parallelism
Type of parallelization where different independent functions or operations execute simultaneously on separate computing units.
Asynchronous algorithms
Optimization methods where processors work without strict synchronization, using available information without waiting for other processes.
Decomposition methods
Techniques dividing a complex optimization problem into simpler subproblems solved in parallel before recombining partial solutions.
Parallel multi-objective optimization
Parallelized approach for solving optimization problems with multiple conflicting objectives by simultaneously exploring different regions of the Pareto front.
GPU Computing
Use of massively parallel graphics processors to accelerate optimization calculations, exploiting their SIMD (Single Instruction, Multiple Data) architecture.
MapReduce Paradigm
Distributed programming model dividing processing into Map (parallelization) and Reduce (aggregation) phases to handle large volumes of optimization data.
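The classic word-count illustration of the two phases, sketched in miniature (in a real MapReduce system the map tasks would run on separate machines; here a thread pool stands in):

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

documents = [
    "parallel computing scales optimization",
    "distributed optimization needs communication",
    "parallel optimization on many machines",
]

def map_phase(doc):
    # Map: emit per-word counts for a single document, independently
    return Counter(doc.split())

with ThreadPoolExecutor() as pool:
    partial_counts = list(pool.map(map_phase, documents))

# Reduce: aggregate the independent partial counts into one global result
word_counts = reduce(lambda a, b: a + b, partial_counts, Counter())
```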
MPI (Message Passing Interface)
Communication standard for distributed systems enabling message exchange between parallel processes in high-performance optimization algorithms.