
AI Glossary

The complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Memory Coalescing

GPU optimization technique in which contiguous memory accesses from threads in a warp are combined into a single transaction, reducing the number of memory transactions and increasing effective bandwidth.


Cache Blocking

Strategy of partitioning data into cache-sized blocks to maximize local data reuse and minimize cache misses.
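As an illustration, a matrix multiply tiled into blocks might look like the following sketch (pure Python shows only the loop structure; the cache benefit materializes in compiled code, where each tile fits in cache and is reused many times):

```python
def blocked_matmul(A, B, n, block=32):
    """Multiply two n x n matrices (lists of lists) in cache-sized tiles.

    The three outer loops walk over `block x block` tiles of C, A and B;
    the three inner loops do the work within one tile, so each tile of
    B is reused `block` times while it would still be resident in cache.
    """
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, block):          # tile row of C
        for kk in range(0, n, block):      # tile of the shared dimension
            for jj in range(0, n, block):  # tile column of C
                for i in range(ii, min(ii + block, n)):
                    for k in range(kk, min(kk + block, n)):
                        a = A[i][k]        # scalar reused across inner loop
                        for j in range(jj, min(jj + block, n)):
                            C[i][j] += a * B[k][j]
    return C
```

The block size is normally chosen so that three tiles together fit in the target cache level.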


NUMA-Aware Allocation

Memory allocation that takes the Non-Uniform Memory Access architecture into account, placing data near the cores that use it most often to reduce access latency.


Memory Pooling

Pre-allocation of a large memory block subdivided into reusable chunks, eliminating the overhead of frequent dynamic allocations and deallocations.
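A minimal sketch of the idea, with a hypothetical `MemoryPool` class standing in for a real allocator (production pools in C/C++ or CUDA hand out raw memory; here each block is a pre-allocated `bytearray`):

```python
class MemoryPool:
    """Fixed-size block pool: one up-front allocation, O(1) acquire/release."""

    def __init__(self, block_size, num_blocks):
        # All memory is allocated once, here.
        self._blocks = [bytearray(block_size) for _ in range(num_blocks)]
        self._free = list(range(num_blocks))  # indices of available blocks

    def acquire(self):
        """Hand out an unused block index without allocating anything."""
        if not self._free:
            raise MemoryError("pool exhausted")
        return self._free.pop()

    def release(self, idx):
        """Return a block to the pool; no deallocation happens."""
        self._free.append(idx)

    def view(self, idx):
        """Access the raw buffer of an acquired block."""
        return self._blocks[idx]
```

After warm-up, `acquire`/`release` never touch the system allocator, which is the point of the technique.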


Zero-Copy Optimization

Technique that lets operations access data in place, without intermediate copies between memory spaces, reducing CPU usage and memory-bandwidth consumption.
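Python's `memoryview` gives a small, self-contained illustration of the idea (production systems apply it between kernel and user space, or host and device memory):

```python
# A memoryview exposes a slice of an existing buffer without copying it.
data = bytearray(b"AI pipelines move large tensors")
view = memoryview(data)[3:12]   # references bytes 3..11 of `data`; no copy

# Mutating the underlying buffer is immediately visible through the view,
# proving the two share storage rather than holding separate copies.
data[3:12] = b"PIPELINES"
print(bytes(view))              # the view now reads b'PIPELINES'
```

By contrast, `data[3:12]` alone would allocate and copy a new `bytearray` slice.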


Register Tiling

Use of processor registers to temporarily store data tiles, minimizing accesses to the slower levels of the memory hierarchy.


Prefetching Instructions

Special instructions that preload data into cache before it is actually used, hiding memory latency by overlapping computation with memory accesses.


Memory Footprint Reduction

Set of techniques (quantization, pruning, compression) aimed at reducing the memory size of AI models without significant performance degradation.
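One of these techniques, symmetric int8 quantization, can be sketched in a few lines (a simplified illustration, not a production quantizer):

```python
def quantize_int8(values):
    """Symmetric int8 quantization: floats -> int8 codes plus one scale.

    Each float32 (4 bytes) becomes a single int8 (1 byte), roughly a 4x
    footprint reduction, at the cost of a rounding error bounded by
    scale / 2 per element.
    """
    scale = max(abs(v) for v in values) / 127 or 1.0
    q = [round(v / scale) for v in values]   # each code fits in [-127, 127]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float values from int8 codes."""
    return [x * scale for x in q]
```

Pruning and compression attack the footprint differently: by removing parameters outright, or by encoding them more compactly.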


Shared Memory Utilization

Optimization of a GPU's shared memory as a fast, reusable staging area for data shared by threads in the same block.


Memory Bandwidth Saturation

State in which memory-access demand exceeds the capacity of the memory bus, making memory the main bottleneck for compute performance.


Page Migration

Dynamic movement of memory pages between NUMA nodes based on access patterns to optimize data locality.


Memory-Aware Scheduling

Task scheduling that takes memory constraints and access patterns into account to minimize contention and maximize parallelism.


Cache-Oblivious Algorithms

Algorithms designed to perform efficiently on any cache hierarchy without requiring specific cache size parameters.
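A classic example is the recursive matrix transpose, sketched below; note that no cache size appears anywhere in the algorithm:

```python
def transpose_rec(A, B, ri, ci, rows, cols, cutoff=16):
    """Cache-oblivious transpose: write A's (ri.., ci..) block into B.

    Recursively split the larger dimension in half. Sub-blocks shrink
    until they fit in *every* level of the cache hierarchy, which is
    what makes the algorithm oblivious to cache sizes. `cutoff` only
    bounds Python's recursion overhead; it is not a cache parameter.
    """
    if rows <= cutoff and cols <= cutoff:
        for r in range(ri, ri + rows):
            for c in range(ci, ci + cols):
                B[c][r] = A[r][c]
    elif rows >= cols:
        half = rows // 2
        transpose_rec(A, B, ri, ci, half, cols, cutoff)
        transpose_rec(A, B, ri + half, ci, rows - half, cols, cutoff)
    else:
        half = cols // 2
        transpose_rec(A, B, ri, ci, rows, half, cutoff)
        transpose_rec(A, B, ri, ci + half, rows, cols - half, cutoff)
```

A cache-aware version would instead hard-code tile sizes tuned to one specific cache; the recursive form adapts to any hierarchy automatically.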


Memory Hierarchy Optimization

Global strategy for data placement according to their access frequency and temporal criticality across the levels of the memory hierarchy.


Tensor Core Memory Layout

Specific organization of tensors in memory to maximize the efficiency of matrix operations on NVIDIA Tensor Cores.


Memory Access Divergence

Phenomenon where threads in a GPU warp access non-contiguous memory addresses, degrading performance through serialization of accesses.


HBM (High Bandwidth Memory) Integration

3D-stacked memory architecture offering superior bandwidth for intensive AI workloads, with access patterns optimized specifically for it.


Memory-Mapped I/O Optimization

Technique that maps device memory or files directly into a process's address space so data can be accessed without intermediate copies, reducing CPU overhead in AI pipelines.
