
AI Glossary

The Complete Artificial Intelligence Dictionary

162 Categories
2,032 Subcategories
23,060 Terms
Memory Coalescing

GPU optimization technique in which contiguous memory accesses from the threads of a warp are grouped into a single transaction, reducing the number of memory transactions and increasing effective bandwidth and throughput.

Cache Blocking

Strategy of partitioning data into cache-sized blocks to maximize local data reuse and minimize cache misses.
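A minimal sketch of cache blocking applied to matrix multiplication; the tile size of 32 is an assumption that should be tuned to the target CPU's cache:

```c
#include <stddef.h>

#define BLOCK 32  /* tile size assumed to fit in L1 cache; tune per CPU */

/* Blocked matrix multiply: C += A * B, all n x n row-major.
   Each tile of B is reused many times while it is still cache-resident,
   instead of being evicted and re-fetched on every outer iteration. */
void matmul_blocked(const double *A, const double *B, double *C, size_t n) {
    for (size_t ii = 0; ii < n; ii += BLOCK)
        for (size_t kk = 0; kk < n; kk += BLOCK)
            for (size_t jj = 0; jj < n; jj += BLOCK)
                /* process one cache-sized tile at a time */
                for (size_t i = ii; i < ii + BLOCK && i < n; i++)
                    for (size_t k = kk; k < kk + BLOCK && k < n; k++) {
                        double a = A[i * n + k];
                        for (size_t j = jj; j < jj + BLOCK && j < n; j++)
                            C[i * n + j] += a * B[k * n + j];
                    }
}
```

The loop order (`i`, `k`, `j` inside the tiles) also keeps the innermost accesses to `B` and `C` sequential, which helps hardware prefetchers.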

NUMA-Aware Allocation

Memory allocation that accounts for the Non-Uniform Memory Access architecture, placing data near the cores that use it most frequently to reduce access latency.

Memory Pooling

Pre-allocating a large block of memory and subdividing it into reusable objects, eliminating the overhead of frequent dynamic allocations and deallocations.
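A minimal sketch of a fixed-size object pool with an intrusive free list; the names (`pool_init`, `pool_alloc`, …) are illustrative, not a standard API:

```c
#include <stdlib.h>
#include <stddef.h>

/* Fixed-size object pool: one upfront malloc, O(1) reuse via a free list. */
typedef struct Node { struct Node *next; } Node;

typedef struct {
    void *block;   /* the single pre-allocated slab */
    Node *free;    /* free list threaded through unused slots */
} Pool;

int pool_init(Pool *p, size_t obj_size, size_t count) {
    if (obj_size < sizeof(Node)) obj_size = sizeof(Node);
    p->block = malloc(obj_size * count);
    if (!p->block) return -1;
    p->free = NULL;
    for (size_t i = 0; i < count; i++) {   /* thread every slot onto the list */
        Node *n = (Node *)((char *)p->block + i * obj_size);
        n->next = p->free;
        p->free = n;
    }
    return 0;
}

void *pool_alloc(Pool *p) {   /* pop a slot; no malloc on the hot path */
    Node *n = p->free;
    if (n) p->free = n->next;
    return n;
}

void pool_free(Pool *p, void *obj) {   /* push the slot back for reuse */
    Node *n = (Node *)obj;
    n->next = p->free;
    p->free = n;
}

void pool_destroy(Pool *p) { free(p->block); }
```

Allocation and deallocation each touch only the list head, so the per-object cost is a couple of pointer writes regardless of how often objects churn.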

Zero-Copy Optimization

Technique that lets operations access data in place, without intermediate copies between memory spaces, reducing CPU overhead and memory-bandwidth usage.
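One common CPU-side form of zero-copy is memory-mapping a file: `mmap` exposes the kernel page cache directly in the process address space, so the bytes are read in place instead of being copied into a user buffer by `read()`. A POSIX-only sketch:

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Sum a file's bytes without copying them into a user buffer. */
long checksum_mmap(const char *path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return -1; }
    void *m = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                       /* the mapping stays valid after close */
    if (m == MAP_FAILED) return -1;
    const unsigned char *data = m;
    long sum = 0;
    for (off_t i = 0; i < st.st_size; i++)  /* pages are touched in place */
        sum += data[i];
    munmap(m, st.st_size);
    return sum;
}
```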

Register Tiling

Use of processor registers to hold small tiles of data temporarily, minimizing accesses to the slower levels of the memory hierarchy.
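A sketch of a 2x2 register-tiled matrix-multiply micro-kernel (assuming `n` is even): the four accumulators stay in registers for the entire inner loop, so each output element is written to memory once instead of `n` times.

```c
#include <stddef.h>

/* 2x2 register-tiled matrix multiply: C += A * B, n x n row-major, n even. */
void matmul_regtile(const double *A, const double *B, double *C, size_t n) {
    for (size_t i = 0; i < n; i += 2)
        for (size_t j = 0; j < n; j += 2) {
            /* register-resident 2x2 tile of C */
            double c00 = 0, c01 = 0, c10 = 0, c11 = 0;
            for (size_t k = 0; k < n; k++) {
                double a0 = A[i * n + k],     a1 = A[(i + 1) * n + k];
                double b0 = B[k * n + j],     b1 = B[k * n + j + 1];
                c00 += a0 * b0;  c01 += a0 * b1;
                c10 += a1 * b0;  c11 += a1 * b1;
            }
            /* one store per C element, after the whole k-loop */
            C[i * n + j]           += c00;
            C[i * n + j + 1]       += c01;
            C[(i + 1) * n + j]     += c10;
            C[(i + 1) * n + j + 1] += c11;
        }
}
```

Each loaded value of `A` and `B` is also reused twice per iteration, halving the loads per multiply compared with the naive kernel.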

Prefetching Instructions

Special instructions that load data into the cache before it is actually used, hiding memory latency by overlapping computation with memory access.
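A sketch using the GCC/Clang builtin `__builtin_prefetch` (guarded so other compilers still build the code); the prefetch distance of 16 elements is an assumption to tune per platform:

```c
#include <stddef.h>

/* Sum an array while hinting the hardware to fetch data ~16 elements
   ahead, so the fetch overlaps with the additions below. */
double sum_prefetch(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++) {
#if defined(__GNUC__)
        if (i + 16 < n)
            /* args: address, rw = 0 (read), locality = 3 (keep in cache) */
            __builtin_prefetch(&a[i + 16], 0, 3);
#endif
        s += a[i];
    }
    return s;
}
```

For a simple sequential scan like this the hardware prefetcher usually keeps up on its own; explicit prefetches pay off mainly for irregular but predictable patterns, such as pointer chasing where the next address is known early.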

Memory Footprint Reduction

Set of techniques (quantization, pruning, compression) aimed at reducing the memory size of AI models without significant performance degradation.
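A minimal sketch of one of the listed techniques, symmetric 8-bit quantization: weights are stored as `int8` plus a single float scale, a 4x footprint reduction versus `float32`. Function names are illustrative.

```c
#include <stdint.h>
#include <stddef.h>

static float absf(float x) { return x < 0.0f ? -x : x; }

/* Quantize n float weights to int8 with one shared scale.
   Returns the scale needed to dequantize. */
float quantize_int8(const float *w, int8_t *q, size_t n) {
    float maxabs = 0.0f;
    for (size_t i = 0; i < n; i++)          /* find the dynamic range */
        if (absf(w[i]) > maxabs) maxabs = absf(w[i]);
    float scale = (maxabs > 0.0f) ? maxabs / 127.0f : 1.0f;
    for (size_t i = 0; i < n; i++) {
        float v = w[i] / scale;
        /* round to nearest representable int8 step */
        q[i] = (int8_t)(v >= 0.0f ? v + 0.5f : v - 0.5f);
    }
    return scale;
}

float dequantize_int8(int8_t q, float scale) { return q * scale; }
```

Real quantization schemes refine this with per-channel scales, zero points for asymmetric ranges, and outlier handling, but the memory-saving mechanism is the same.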

Shared Memory Utilization

Optimizing the use of GPU shared memory as a fast scratchpad for data shared and reused by the threads of a block.

Memory Bandwidth Saturation

State in which memory-access demand exceeds the capacity of the memory bus, making memory rather than compute the main performance bottleneck.

Page Migration

Dynamic movement of memory pages between NUMA nodes based on access patterns to optimize data locality.

Memory-Aware Scheduling

Task scheduling that takes memory constraints and access patterns into account to minimize contention and maximize parallelism.

Cache-Oblivious Algorithms

Algorithms designed to perform efficiently on any cache hierarchy without requiring specific cache size parameters.
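A classic instance is the recursive matrix transpose: by halving the problem until sub-blocks are tiny, some level of the recursion fits in cache whatever the cache size is, with no tuning parameter. A sketch (the base-case cutoff of 16 only bounds recursion overhead, not cache behavior):

```c
#include <stddef.h>

/* Cache-oblivious out-of-place transpose: B = A^T, n x n row-major. */
static void transpose_rec(const double *A, double *B, size_t n,
                          size_t r0, size_t r1, size_t c0, size_t c1) {
    if (r1 - r0 <= 16 && c1 - c0 <= 16) {   /* small block: transpose directly */
        for (size_t i = r0; i < r1; i++)
            for (size_t j = c0; j < c1; j++)
                B[j * n + i] = A[i * n + j];
    } else if (r1 - r0 >= c1 - c0) {        /* split the longer dimension */
        size_t rm = (r0 + r1) / 2;
        transpose_rec(A, B, n, r0, rm, c0, c1);
        transpose_rec(A, B, n, rm, r1, c0, c1);
    } else {
        size_t cm = (c0 + c1) / 2;
        transpose_rec(A, B, n, r0, r1, c0, cm);
        transpose_rec(A, B, n, r0, r1, cm, c1);
    }
}

void transpose(const double *A, double *B, size_t n) {
    transpose_rec(A, B, n, 0, n, 0, n);
}
```

Contrast this with cache blocking above, which needs an explicit tile size chosen for a particular cache.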

Memory Hierarchy Optimization

Overall strategy for placing data across the levels of the memory hierarchy according to access frequency and latency criticality.

Tensor Core Memory Layout

Specific organization of tensors in memory to maximize the efficiency of matrix operations on NVIDIA Tensor Cores.

Memory Access Divergence

Phenomenon in which threads within a GPU warp access non-contiguous memory addresses, degrading performance by forcing accesses to be serialized.

HBM (High Bandwidth Memory) Integration

3D-stacked memory architecture offering superior bandwidth for intensive AI workloads, together with access-pattern optimizations specific to it.

Memory-Mapped I/O Optimization

Technique that maps devices or files directly into the address space so data can be accessed in place, reducing copies and CPU overhead in AI pipelines.
