AI Glossary

The Complete Artificial Intelligence Dictionary

162 categories · 2,032 subcategories · 23,060 terms
📖 Low-rank matrices

Mathematical representation in which a matrix is expressed as the product of two smaller matrices of reduced rank. This decomposition reduces the number of parameters required while capturing the essential information of the transformation.
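
A minimal NumPy sketch of the idea: a large matrix is approximated by the product of two thin matrices obtained from a truncated SVD. The matrix sizes and the rank are illustrative assumptions, not values from any particular model.

```python
import numpy as np

d, k, r = 1024, 1024, 8          # illustrative layer size and rank
W = np.random.randn(d, k)        # full-rank matrix (d*k parameters)

# Truncated SVD keeps only the r strongest directions of W.
U, S, Vt = np.linalg.svd(W, full_matrices=False)
B = U[:, :r] * S[:r]             # shape (d, r)
A = Vt[:r, :]                    # shape (r, k)
W_approx = B @ A                 # rank-r approximation of W

full_params = d * k              # 1,048,576
low_rank_params = d * r + r * k  # 16,384 -> ~64x fewer
print(full_params, low_rank_params)
```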

📖 Memory efficiency

Optimization of RAM and VRAM usage during AI model training and inference. Techniques like LoRA drastically reduce memory consumption by limiting the number of parameters that are modified.
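
A back-of-the-envelope sketch of why this matters: with the Adam optimizer, every trainable parameter typically carries a gradient plus two moment buffers, so shrinking the trainable set shrinks those buffers proportionally. The layer size below is an illustrative assumption.

```python
# Illustrative memory arithmetic for one 4096x4096 linear layer, fp32 (4 bytes).
d = k = 4096
r = 8
bytes_per_val = 4

# Adam keeps a gradient plus two moment buffers per trainable parameter.
def training_overhead(n_params):
    return 3 * n_params * bytes_per_val

full = training_overhead(d * k)          # fine-tune the whole matrix
lora = training_overhead(d * r + r * k)  # train only the two LoRA matrices

print(f"full: {full / 2**20:.1f} MiB, LoRA: {lora / 2**20:.2f} MiB")
# full: 192.0 MiB, LoRA: 0.75 MiB -- per layer, before activations
```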

📖 Trainable parameters

Subset of neural network weights that are actually modified during the learning process. In LoRA, only a small percentage (typically 0.1-1%) of the total parameters are trainable.
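
A PyTorch sketch of counting trainable parameters when the base weight is frozen and only two small LoRA matrices require gradients; the dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

d, k, r = 4096, 4096, 8
base = nn.Linear(k, d, bias=False)
base.weight.requires_grad_(False)   # freeze the pre-trained weight

# Only the LoRA factors are trainable.
A = nn.Parameter(torch.randn(r, k) * 0.01)
B = nn.Parameter(torch.zeros(d, r))

trainable = A.numel() + B.numel()
total = base.weight.numel() + trainable
print(f"{trainable}/{total} = {100 * trainable / total:.2f}% trainable")
# 65536/16842752 = 0.39% trainable
```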

📖 Rank decomposition

Algebraic technique that expresses a weight update as the product BA of two low-rank matrices, so that the adapted weight becomes W + BA. This decomposition forms the mathematical foundation of LoRA adaptation.
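
A minimal sketch of the adapted forward pass: the frozen weight and the low-rank update act on the input in parallel, computing B(Ax) so the full d×k product BA is never materialized. Shapes are illustrative.

```python
import numpy as np

d, k, r = 512, 512, 4
W = np.random.randn(d, k)         # frozen pre-trained weight
B = np.zeros((d, r))              # zero init: the update starts at zero
A = np.random.randn(r, k) * 0.01

x = np.random.randn(k)
h = W @ x + B @ (A @ x)           # equivalent to (W + B @ A) @ x, but cheaper
```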

📖 Efficient fine-tuning

Paradigm for adapting pre-trained models that aims to minimize the computational and memory resources required. Methods such as LoRA, Adapters, or Prefix-tuning allow a model to be specialized without modifying all of its parameters.

📖 PEFT (Parameter-Efficient Fine-Tuning)

Category of model adaptation techniques that aim to modify as few parameters as possible during fine-tuning. LoRA is one of the most popular PEFT approaches, along with Adapters, Prefix-tuning, and soft prompts.
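
A sketch using the Hugging Face peft library, one common implementation of this category; the base checkpoint and target module names are assumptions that vary by architecture ("c_attn" is the attention projection in GPT-2).

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # example base model

config = LoraConfig(
    r=8,                          # rank of the update matrices
    lora_alpha=16,                # scaling factor (see the alpha entry)
    target_modules=["c_attn"],    # which modules receive LoRA adapters
    lora_dropout=0.05,
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # reports the small trainable fraction
```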

📖 Alpha scaling factor

Crucial hyperparameter in LoRA that controls the magnitude of the adaptation applied to the original weights. This scaling factor adjusts the relative influence of the low-rank matrices compared to the pre-trained weights.
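
A sketch of where alpha enters the computation: in the reference LoRA formulation the low-rank update is multiplied by alpha / r before being added to the frozen path. The values below are illustrative.

```python
import numpy as np

d, k, r, alpha = 256, 256, 8, 16
W = np.random.randn(d, k)
B = np.random.randn(d, r) * 0.01
A = np.random.randn(r, k) * 0.01

scaling = alpha / r               # alpha is defined relative to the rank
x = np.random.randn(k)
h = W @ x + scaling * (B @ (A @ x))  # larger alpha -> stronger adaptation
```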

📖 Multi-LoRA

Architecture that allows multiple specialized LoRA adaptations to be applied simultaneously to the same base model. This approach facilitates rapid switching between tasks or domains of expertise without reloading the full model.
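
A minimal sketch of the idea: several adapters share one frozen base weight, and selecting a task means selecting which pair of low-rank matrices is applied. The adapter names are hypothetical.

```python
import numpy as np

d, k, r = 256, 256, 8
W = np.random.randn(d, k)         # one frozen base weight, shared by all tasks

# Hypothetical adapters, one (B, A) pair per task.
adapters = {
    "summarization": (np.random.randn(d, r) * 0.01, np.random.randn(r, k) * 0.01),
    "translation":   (np.random.randn(d, r) * 0.01, np.random.randn(r, k) * 0.01),
}

def forward(x, task):
    B, A = adapters[task]         # switch adapters without touching W
    return W @ x + B @ (A @ x)

x = np.random.randn(k)
y = forward(x, "translation")
```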

📖 Zero-shot adaptation

Ability of a model adapted with LoRA to generalize to tasks or domains not seen during adaptation training. This property emerges from preserving the base model's general knowledge while adding targeted specializations.

📖 LoRA rank hyperparameter

Parameter determining the dimension of the low-rank matrices in the LoRA decomposition, controlling the trade-off between expressiveness and efficiency. Typical ranks range from 4 to 64, depending on the complexity of the adaptation task.
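
A quick sketch of the trade-off the rank controls: the adapter's parameter count grows linearly with r, so doubling the rank doubles the adapter size. The layer dimensions are illustrative assumptions.

```python
d = k = 4096                      # illustrative layer size

for r in (4, 8, 16, 32, 64):
    params = d * r + r * k        # size of B (d x r) plus A (r x k)
    print(f"rank {r:>2}: {params:>8,} adapter parameters")
# rank  4:   32,768  ...  rank 64:  524,288
```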

📖 Weight merging

Process of integrating LoRA adaptations into the base model weights to eliminate computational overhead during inference. After merging, one recovers a standard model with the same performance as the adapted version.
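
A sketch of the merge itself: the scaled update is folded into the base weight once, after which inference uses a single standard matrix multiply. Shapes and values are illustrative.

```python
import numpy as np

d, k, r, alpha = 256, 256, 8, 16
W = np.random.randn(d, k)
B = np.random.randn(d, r) * 0.01
A = np.random.randn(r, k) * 0.01
scaling = alpha / r

W_merged = W + scaling * (B @ A)  # one-time merge, no extra inference cost

x = np.random.randn(k)
adapted = W @ x + scaling * (B @ (A @ x))
merged = W_merged @ x
assert np.allclose(adapted, merged)  # identical outputs
```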
