
Glossario IA

The complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Robust scaling

Technique using quantiles to resist outliers, typically applying (x - median)/IQR where IQR represents the interquartile range. This approach maintains transformation stability even in the presence of noisy or extreme data.
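A minimal pure-Python sketch of this formula (the quartile split below uses the simple exclude-the-median convention; interpolation-based quantile methods can give slightly different IQR values):

```python
from statistics import median

def robust_scale(xs):
    """Scale as (x - median) / IQR, resisting outliers."""
    s = sorted(xs)
    med = median(s)
    q1 = median(s[: len(s) // 2])         # lower half, median excluded
    q3 = median(s[(len(s) + 1) // 2 :])   # upper half, median excluded
    return [(x - med) / (q3 - q1) for x in xs]
```

On `[1, 2, 3, 4, 100]` the median maps to exactly 0 and the outlier stays far out, instead of dragging the scale of every other point with it.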


L1 normalization

Scaling method dividing each value by the absolute sum of all values in the vector, ensuring the L1 norm equals 1. This transformation is particularly useful for probability-based models and sparse representations.
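A minimal sketch (assumes the vector has at least one non-zero entry):

```python
def l1_normalize(v):
    """Divide by the sum of absolute values so the L1 norm becomes 1."""
    s = sum(abs(x) for x in v)
    return [x / s for x in v]
```

For a non-negative vector this turns counts into proportions, which is why it suits probability-based models.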


L2 normalization

Process normalizing vectors by dividing each component by the square root of the sum of squares, ensuring a unit Euclidean norm. This technique is essential for algorithms sensitive to vector magnitude like SVMs and neural networks.
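A minimal sketch (assumes a non-zero vector):

```python
import math

def l2_normalize(v):
    """Divide by the Euclidean norm so the L2 norm becomes 1."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]
```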


Quantile normalization

Non-parametric technique transforming data to follow a specified target distribution (uniform or normal) by mapping each value through the empirical cumulative distribution function. This approach is particularly effective for handling strongly skewed or multimodal distributions.
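A sketch of the uniform-target case using mid-rank quantiles (ties are not averaged here; mapping to a normal target would pass these quantiles through an inverse normal CDF such as `statistics.NormalDist().inv_cdf`):

```python
def quantile_to_uniform(xs):
    """Map each value to its empirical quantile in (0, 1)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    n = len(xs)
    out = [0.0] * n
    for rank, i in enumerate(order):
        out[i] = (rank + 0.5) / n   # mid-rank convention
    return out
```

Because only ranks matter, the shape of the input distribution is discarded entirely, which is what makes the method insensitive to skewness.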


Vector unit scaling

Normalization dividing each vector by its Euclidean norm, resulting in unit-length vectors in multidimensional space. This method is crucial for algorithms based on cosine similarity measures and text representations.
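A sketch showing why this matters for cosine similarity: once vectors have unit length, a plain dot product *is* the cosine of the angle between them:

```python
import math

def to_unit(v):
    """Scale a vector to unit Euclidean length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cosine(u, v):
    """On unit vectors, the dot product equals the cosine similarity."""
    return sum(a * b for a, b in zip(to_unit(u), to_unit(v)))
```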


Decimal normalization

Simple technique dividing values by a power of 10 to bring them into the [-1,1] interval, based on the maximum number of digits before the decimal. This method preserves relative magnitude order while reducing absolute numerical scale.
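A minimal sketch (assumes a non-zero maximum; note that floating-point `log10` can be off by one ulp near exact powers of ten, so counting digits of the integer part is more robust in production code):

```python
import math

def decimal_scale(xs):
    """Divide by 10**j, the smallest power of ten bounding max |x| by 1."""
    j = math.ceil(math.log10(max(abs(x) for x in xs)))
    return [x / 10 ** j for x in xs]
```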


Robust standardization

Standardization variant using median and median absolute deviation (MAD) as measures of central tendency and dispersion, offering increased resistance to outliers. This approach maintains interpretability while ensuring robustness.
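A minimal sketch (the consistency constant 1.4826 often multiplied into the MAD to match the standard deviation under normality is omitted here):

```python
from statistics import median

def mad_standardize(xs):
    """Center on the median, scale by the median absolute deviation."""
    med = median(xs)
    mad = median(abs(x - med) for x in xs)
    return [(x - med) / mad for x in xs]
```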


Logarithmic scaling

Transformation applying log(x + c) where c is a constant to handle null values, effectively compressing the scale of large values. This method is particularly suited for data following a power law or exhibiting right skewness.
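A minimal sketch (the default c = 1 handles zeros; the input must satisfy x + c > 0):

```python
import math

def log_scale(xs, c=1.0):
    """Apply log(x + c); the shift c guards against zeros."""
    return [math.log(x + c) for x in xs]
```

For the common c = 1 case, `math.log1p(x)` is the numerically safer equivalent when x is close to zero.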


Rank normalization

Non-parametric technique replacing each value with its normalized rank in the dataset, eliminating the influence of extreme values. This approach is robust to outliers and preserves only the relative order of observations.
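A minimal sketch (a simple variant: ties are not averaged, and at least two distinct values are assumed):

```python
def rank_normalize(xs):
    """Replace each value by its rank, rescaled to [0, 1]."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    n = len(xs)
    out = [0.0] * n
    for rank, i in enumerate(order):
        out[i] = rank / (n - 1)
    return out
```

Replacing a value with an arbitrarily larger one leaves the output unchanged, which is exactly the claimed robustness to outliers.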


Median standardization

Method centering data around the median rather than the mean, dividing by a robust dispersion measure such as the interquartile range. This approach offers better resistance to skewed distributions and outliers.
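The formula is the same (x − median)/IQR used for robust scaling; what this sketch adds is a contrast with mean-based z-scores on data containing one extreme value (the comparison, not the exact numbers, is the point):

```python
from statistics import mean, median, pstdev

def z_score(xs):
    """Classic standardization: (x - mean) / standard deviation."""
    mu, sd = mean(xs), pstdev(xs)
    return [(x - mu) / sd for x in xs]

def median_iqr(xs):
    """Robust counterpart: (x - median) / IQR."""
    s = sorted(xs)
    med = median(s)
    q1 = median(s[: len(s) // 2])
    q3 = median(s[(len(s) + 1) // 2 :])
    return [(x - med) / (q3 - q1) for x in xs]

data = list(range(1, 10)) + [1000]   # one extreme outlier
# The outlier inflates the standard deviation, squashing the z-scores of
# typical points together; the median/IQR spacing is unaffected.
```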


Maximum absolute scaling

Simple technique dividing each value by the maximum absolute value of the feature, preserving signs and zeros while bounding values in [-1,1]. This method is particularly effective for already centered or sparse data.
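A minimal sketch (assumes at least one non-zero value):

```python
def max_abs_scale(xs):
    """Divide by max |x|: signs and zeros kept, range bounded to [-1, 1]."""
    m = max(abs(x) for x in xs)
    return [x / m for x in xs]
```

Because zeros stay zero, sparse inputs stay sparse, unlike mean-centering approaches.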


Variance normalization

Standardization process dividing variables by their variance, thus equalizing the importance of each feature in scale-sensitive algorithms. This approach is particularly useful for principal component analysis and ridge regression.
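A minimal sketch of the definition as stated (note this divides by the variance itself; the more common unit-variance z-score divides by the standard deviation instead):

```python
from statistics import pvariance

def variance_normalize(xs):
    """Divide each value by the population variance of the feature."""
    var = pvariance(xs)
    return [x / var for x in xs]
```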


Coefficient of variation standardization

Advanced method normalizing data by dividing by the coefficient of variation (σ/μ), allowing comparison of variables with different means and variances. This technique is particularly relevant for data where relative variability is more important than absolute variability.
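A minimal sketch of the definition as stated (assumes a non-zero mean and non-constant data; note that dividing every value by σ/μ rescales by the same factor, so ratios between values are preserved):

```python
from statistics import mean, pstdev

def cv_standardize(xs):
    """Divide by the coefficient of variation sigma/mu."""
    cv = pstdev(xs) / mean(xs)
    return [x / cv for x in xs]
```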
