
AI Glossary

The Complete Artificial Intelligence Dictionary

162 Categories · 2,032 Subcategories · 23,060 Terms

TVM (Tensor Virtual Machine)

An open-source compilation framework that optimizes and executes tensor computations across diverse hardware architectures by progressively lowering deep learning models to lower-level representations.

Just-In-Time (JIT) Compilation

A compilation technique that translates bytecode or intermediate code into native machine code at runtime, enabling optimizations based on the actual system state.

Ahead-of-Time (AOT) Compilation

The process of compiling source code into native machine code before execution, reducing startup latency and enabling aggressive optimizations independent of the runtime environment.

Graph IR (Intermediate Representation)

An abstract representation of an AI model's computation graph, used by compilers to analyze dependencies and apply optimization transformations before code generation.
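
A toy version of such a graph IR can be written as a mapping from node names to an operator and its input dependencies; a compiler then walks the graph in topological order to analyze dependencies before transforming it. The node and operator names below are invented for illustration.

```python
# A toy graph IR: each node records its operator and input dependencies.
graph = {
    "x":    ("input", []),
    "w":    ("input", []),
    "mm":   ("matmul", ["x", "w"]),
    "relu": ("relu",   ["mm"]),
}

def topo_order(g):
    """Return node names so every node appears after all of its inputs."""
    order, seen = [], set()
    def visit(n):
        if n in seen:
            return
        seen.add(n)
        for dep in g[n][1]:   # visit dependencies first
            visit(dep)
        order.append(n)
    for node in g:
        visit(node)
    return order

print(topo_order(graph))  # → ['x', 'w', 'mm', 'relu']
```

Passes such as fusion or dead-code elimination are typically run over exactly this kind of ordered traversal.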

Operator Fusion

An optimization technique that combines multiple elementary operations from the computation graph into a single computation kernel, reducing memory overhead and improving data locality.
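
The memory effect is easy to see in a scalar sketch: unfused, the intermediate result is materialized and traversed twice; fused, each element is read once, combined, and written once. (Real compilers fuse loops over tensors in generated kernels; the two Python functions here are purely illustrative.)

```python
# Two elementwise ops, add then scale.
def unfused(a, b, s):
    t = [x + y for x, y in zip(a, b)]           # kernel 1: writes intermediate
    return [s * x for x in t]                   # kernel 2: re-reads intermediate

def fused(a, b, s):
    return [s * (x + y) for x, y in zip(a, b)]  # single fused kernel, no intermediate

a, b = [1, 2, 3], [4, 5, 6]
print(unfused(a, b, 2))  # → [10, 14, 18]
print(fused(a, b, 2))    # → [10, 14, 18], same result with one pass over memory
```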

Auto-scheduling

An automated process of searching for the best execution configuration (tiling, vectorization, parallelization) for a computation kernel on a given target hardware architecture.

Target-specific Optimization

A set of compilation techniques that adapt the generated code to the unique characteristics of a hardware architecture (CPU, GPU, TPU, ASIC) to maximize performance.

Relay IR

A high-level functional intermediate representation in TVM, supporting computation graphs with control flow and enabling complex semantic optimizations.

Tensor Expression (TE)

A domain-specific language in TVM for describing tensor computations at a high level of abstraction, facilitating automatic generation of optimized code for various targets.

Kernel Auto-tuning

The process of systematically exploring a computational kernel's optimization parameter space to identify the configuration that delivers the best performance on specific hardware.
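
At its simplest, auto-tuning is a search loop: try each candidate configuration, score it, keep the best. The sketch below substitutes a hypothetical analytical cost model for the real hardware measurement a tuner like TVM's would perform; the function and the penalty values are assumptions made for illustration only.

```python
# Auto-tuning as exhaustive search over candidate tile sizes.
def cost_model(tile):
    # Hypothetical stand-in for a hardware timing run: penalize tiles that
    # are far from an assumed cache-friendly size (32) or not 8-aligned.
    return abs(tile - 32) + (1 if tile % 8 else 0)

def autotune(candidates):
    """Return the candidate configuration with the lowest measured cost."""
    return min(candidates, key=cost_model)

best = autotune([4, 8, 16, 32, 64, 128])
print(best)  # → 32
```

Real tuners replace the cost model with on-device measurements (or a learned predictor) and search far larger spaces than a single tile dimension.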

HLO (High-Level Optimizer) IR

The intermediate representation used by XLA, describing computations as high-level tensor operations that are optimized before code generation for accelerators.

Codegen (Code Generation)

The final phase of compilation, in which the optimized intermediate representation is translated into executable machine code for the target architecture.

Polyhedral Model

A mathematical model used to represent and transform nested loops, enabling complex optimizations such as tiling and automatic parallelization.

LLVM (Low Level Virtual Machine)

A modular compilation infrastructure used by many AI compilers to generate optimized machine code for different CPU architectures.

Memory Layout Optimization

A technique that reorganizes data in memory to improve spatial and temporal locality, reducing access latency and increasing computational throughput.
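
A layout change in miniature: the same 2×3 matrix stored row-major versus column-major. A kernel that scans rows touches contiguous memory only in the row-major layout, which is why compilers transpose storage to match the dominant access pattern. The helper names are invented for this sketch.

```python
# Flatten a matrix in two different memory layouts.
def to_row_major(rows):
    """Elements of each row are adjacent in memory."""
    return [v for row in rows for v in row]

def to_col_major(rows):
    """Elements of each column are adjacent in memory."""
    return [row[j] for j in range(len(rows[0])) for row in rows]

m = [[1, 2, 3],
     [4, 5, 6]]
print(to_row_major(m))  # → [1, 2, 3, 4, 5, 6]
print(to_col_major(m))  # → [1, 4, 2, 5, 3, 6]
```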

Hardware Abstraction Layer (HAL)

A software interface that hides the specific details of the underlying hardware, allowing compilers to generate portable code while still leveraging native optimizations.

Vectorization

An optimization technique that transforms scalar operations into vector (SIMD) operations, exploiting the parallel compute units of modern processors.

Tiling

A strategy that partitions data into blocks (tiles) to improve cache reuse and parallelization efficiency in tensor computations.
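
A one-dimensional reduction shows the pattern: instead of streaming the whole array, process it in fixed-size blocks so each block can stay resident in cache or be handed to a parallel worker. The tile size of 4 below is an arbitrary illustrative choice.

```python
# Tiled reduction: the inner kernel only ever sees one cache-sized block.
def tiled_sum(data, tile=4):
    total = 0
    for start in range(0, len(data), tile):
        block = data[start:start + tile]  # one tile of at most `tile` elements
        total += sum(block)               # inner kernel works on the tile
    return total

vals = list(range(10))
print(tiled_sum(vals))               # → 45
print(tiled_sum(vals) == sum(vals))  # tiling preserves the result → True
```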

Graph Rewriting

The systematic transformation of a computation graph by applying rewrite rules that replace subgraphs with more efficient equivalents.
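
A single rule applied bottom-up to a toy expression tree illustrates the mechanism: the subgraph `(mul e 1)` is replaced by the cheaper equivalent `e`. Both the tuple encoding and the rule are simplifications chosen for this sketch; real rewriters maintain libraries of such algebraic and layout rules.

```python
# Bottom-up graph rewriting with the rule: e * 1  →  e
def rewrite(node):
    if isinstance(node, tuple):
        op, *args = node
        node = (op, *[rewrite(a) for a in args])  # rewrite children first
        if node[0] == "mul" and node[2] == 1:     # rule matches this subgraph
            return node[1]                        # replace it with `e`
    return node

expr = ("add", ("mul", "x", 1), ("mul", "y", 1))
print(rewrite(expr))  # → ('add', 'x', 'y')
```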
