AI Glossary

The complete AI glossary

162 categories · 2,032 subcategories · 23,060 terms

Low-rank matrices

Mathematical representation in which a matrix is expressed as the product of two smaller matrices of reduced rank. This factorization reduces the number of parameters required while preserving the essential structure of the transformation.
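A minimal NumPy sketch of the idea, with dimensions chosen purely for illustration:

```python
import numpy as np

d, k, r = 1024, 1024, 8              # full matrix is d x k, rank r << min(d, k)

B = np.random.randn(d, r)            # d x r factor
A = np.random.randn(r, k)            # r x k factor
delta_W = B @ A                      # rank-r matrix of shape d x k

full_params = d * k                  # 1,048,576 values in the full matrix
factored_params = d * r + r * k      # 16,384 values in B and A combined
print(f"{full_params:,} vs {factored_params:,} parameters")
```

Storing B and A instead of their product cuts the parameter count by a factor of roughly min(d, k) / (2r) when d = k.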

Memory efficiency

Optimization of RAM and VRAM usage during AI model training and inference. Techniques like LoRA drastically reduce memory consumption by limiting the number of parameters that are updated, and with them the gradients and optimizer state that must be kept in memory.
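A back-of-the-envelope estimate of why this matters, using illustrative numbers (a hypothetical 7B-parameter model and Adam's two float32 state tensors per trainable parameter), not a benchmark:

```python
total_params = 7e9                    # hypothetical 7B-parameter model
lora_params = 0.005 * total_params    # ~0.5% trainable, a typical LoRA fraction

bytes_per_param = 8                   # two float32 Adam moments per parameter
full_gb = total_params * bytes_per_param / 1e9
lora_gb = lora_params * bytes_per_param / 1e9
print(f"optimizer state: {full_gb:.0f} GB (full) vs {lora_gb:.2f} GB (LoRA)")
```

Gradients shrink by the same factor; only the frozen weights still occupy their full footprint.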

Trainable parameters

Subset of neural network weights that are actually modified during the learning process. In LoRA, only a small percentage (typically 0.1-1%) of total parameters are trainable.
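A PyTorch sketch of the freeze-then-adapt pattern on a single layer standing in for a pre-trained model:

```python
import torch
import torch.nn as nn

base = nn.Linear(4096, 4096, bias=False)      # stand-in pre-trained layer

for p in base.parameters():                   # freeze every base weight
    p.requires_grad = False

r = 8
lora_A = nn.Parameter(torch.randn(r, 4096))   # only these two factors train
lora_B = nn.Parameter(torch.zeros(4096, r))

trainable = lora_A.numel() + lora_B.numel()
total = trainable + sum(p.numel() for p in base.parameters())
print(f"trainable: {trainable / total:.2%} of all parameters")
```

On a full multi-layer model the fraction drops further, since adapters are usually attached to only a subset of layers.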

Rank decomposition

Algebraic technique that represents the adapted weight matrix as W + BA, where the update BA is the product of two low-rank matrices B and A. This decomposition forms the mathematical foundation of LoRA adaptation.
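A worked NumPy example of the adapted forward pass; the zero initialization of B follows the LoRA paper, so the adapted model starts out identical to the base model:

```python
import numpy as np

d, k, r = 512, 512, 4
W = np.random.randn(d, k)        # frozen pre-trained weight
B = np.zeros((d, r))             # initialized to zero (LoRA convention)
A = np.random.randn(r, k)

x = np.random.randn(k)
h = W @ x + B @ (A @ x)          # (W + BA) x, without forming BA explicitly

assert np.allclose(h, W @ x)     # holds at initialization, since B = 0
```

Computing B @ (A @ x) instead of (B @ A) @ x keeps the extra cost proportional to r rather than to the full matrix size.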

Efficient fine-tuning

Paradigm for adapting pre-trained models while minimizing the computational and memory resources required. Methods such as LoRA, Adapters, or Prefix-tuning specialize a model without modifying all of its parameters.

PEFT (Parameter-Efficient Fine-Tuning)

Category of model adaptation techniques that aim to modify as few parameters as possible during fine-tuning. LoRA is one of the most popular PEFT approaches, alongside Adapters, Prefix-tuning, and soft prompts.
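A sketch using the Hugging Face peft library, which implements LoRA among other PEFT methods; the model name is a placeholder and the hyperparameters are typical values, not a recommendation:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("some-org/some-base-model")  # placeholder

config = LoraConfig(
    r=8,                                  # rank of the low-rank matrices
    lora_alpha=16,                        # alpha scaling factor
    target_modules=["q_proj", "v_proj"],  # layers that receive adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)      # wraps the frozen base with adapters
model.print_trainable_parameters()        # reports the trainable fraction
```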

Alpha scaling factor

Crucial hyperparameter in LoRA controlling the amplitude of the adaptation applied to the original weights: the low-rank update BA is scaled by alpha divided by the rank r before being added to the pre-trained weights, adjusting the relative influence of the adapter.
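In the original LoRA formulation the update is scaled by alpha / r; a minimal NumPy sketch:

```python
import numpy as np

d, k, r, alpha = 512, 512, 8, 16
W = np.random.randn(d, k)                   # frozen pre-trained weight
B, A = np.random.randn(d, r), np.random.randn(r, k)

scaling = alpha / r                         # 2.0 with these values
W_adapted = W + scaling * (B @ A)           # alpha rescales the update
```

Keeping alpha fixed while varying r keeps the update's overall scale comparable across rank choices, which helps avoid retuning the learning rate.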

Multi-LoRA

Architecture that serves multiple specialized LoRA adapters on top of the same base model. This approach enables rapid switching between different tasks or domains of expertise without reloading the full model.
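A toy NumPy sketch of the serving pattern: one shared frozen weight, one (B, A) pair per task (the task names are invented for illustration):

```python
import numpy as np

d, k, r = 512, 512, 8
W = np.random.randn(d, k)                  # shared frozen base weight

adapters = {                               # one low-rank pair per task
    "summarize": (np.random.randn(d, r), np.random.randn(r, k)),
    "translate": (np.random.randn(d, r), np.random.randn(r, k)),
}

def forward(x, task):
    B, A = adapters[task]                  # switch adapters per request
    return W @ x + B @ (A @ x)             # base weights loaded only once

y = forward(np.random.randn(k), "translate")
```

Because each adapter is tiny relative to W, many can stay resident in memory and be swapped per request at negligible cost.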

Zero-shot adaptation

Ability of a model adapted with LoRA to generalize to tasks or domains not seen during adaptation training. This property emerges from preserving the base model's general knowledge while adding targeted specializations.

LoRA rank hyperparameter

Hyperparameter determining the inner dimension r of the low-rank matrices in the LoRA decomposition, controlling the trade-off between expressiveness and efficiency. Typical ranks range from 4 to 64, depending on the complexity of the adaptation task.
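A quick calculation of how adapter size grows with the rank, for a hypothetical 4096 x 4096 projection matrix:

```python
d = k = 4096                          # illustrative attention projection size

for r in (4, 8, 16, 64):
    lora = d * r + r * k              # parameters in B and A combined
    print(f"r={r:>2}: {lora:>9,} LoRA params "
          f"({lora / (d * k):.2%} of the {d * k:,}-param full matrix)")
```

Even at r = 64 the adapter is only a few percent of the full matrix, which is why raising the rank is a cheap way to buy expressiveness.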

Weight merging

Process of integrating LoRA adaptations into the base model weights to eliminate computational overhead during inference. Merging yields a standard model with the same performance as the adapted version.
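A NumPy sketch of the merge; afterwards, inference uses a single plain matrix and pays no adapter overhead:

```python
import numpy as np

d, k, r, alpha = 512, 512, 8, 16
W = np.random.randn(d, k)                  # frozen base weight
B, A = np.random.randn(d, r), np.random.randn(r, k)

# Merge once, offline: fold the scaled low-rank update into the base weight.
W_merged = W + (alpha / r) * (B @ A)

x = np.random.randn(k)
assert np.allclose(W_merged @ x, W @ x + (alpha / r) * (B @ (A @ x)))
```

The merge is reversible: subtracting (alpha / r) * B @ A recovers the original base weights, up to floating-point error.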
