
AI Glossary

The complete dictionary of artificial intelligence

162 categories · 2,032 subcategories · 23,060 terms

Optimal hyperplane

Decision boundary in a high-dimensional space that maximizes the distance to the closest training points of each class, ensuring the best possible separation of the data.
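As a minimal sketch in plain Python (the weight vector `w` and bias `b` below are invented values, not a trained model), the hyperplane w·x + b = 0 assigns a class to a point by the sign of w·x + b:

```python
# Sketch: classify a point by which side of the hyperplane w.x + b = 0 it lies on.
# The parameters here are illustrative, not the output of any training run.

def decision(w, b, x):
    """Return +1 or -1 depending on the sign of w.x + b."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

w, b = [2.0, -1.0], 0.5              # hypothetical learned parameters
print(decision(w, b, [1.0, 1.0]))    # score 2 - 1 + 0.5 = 1.5 -> 1
print(decision(w, b, [-1.0, 2.0]))   # score -2 - 2 + 0.5 = -3.5 -> -1
```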


Support vector

Training points that lie on the margins defining the optimal hyperplane; these critical points determine the position and orientation of the decision boundary.


Maximum margin

Distance between the decision hyperplane and the closest training points of each class, which the SVM algorithm seeks to maximize to improve generalization.
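For a hyperplane w·x + b = 0 with supporting hyperplanes at w·x + b = ±1, the width of the margin is 2/‖w‖ — which is why minimizing ‖w‖ maximizes the margin. A one-line sketch:

```python
import math

def margin_width(w):
    """Geometric margin between the two supporting hyperplanes: 2 / ||w||."""
    return 2.0 / math.sqrt(sum(wi * wi for wi in w))

print(margin_width([3.0, 4.0]))  # ||w|| = 5, so the margin is 2/5 = 0.4
```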


Kernel function

Mathematical function that implicitly transforms data into a higher-dimensional space without performing the explicit transformation, allowing linear separation of non-linear data.
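The "kernel trick" can be checked directly for the quadratic kernel (x·y)², which for 2-D inputs equals the dot product of an explicit degree-2 feature map — the kernel gets the same answer without ever building the feature space. A small self-contained sketch:

```python
import math

def kernel_quadratic(x, y):
    # Implicit: the feature space is never constructed.
    return (x[0] * y[0] + x[1] * y[1]) ** 2

def phi(x):
    # Explicit degree-2 feature map for 2-D input: [x1^2, sqrt(2)*x1*x2, x2^2]
    return [x[0] ** 2, math.sqrt(2) * x[0] * x[1], x[1] ** 2]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, y = [1.0, 2.0], [3.0, 0.5]
print(kernel_quadratic(x, y))   # (3 + 1)^2 = 16.0
print(dot(phi(x), phi(y)))      # same value, computed the expensive way
```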


Linear SVM

Variant of SVM that uses a linear hyperplane to separate classes, particularly effective when data are linearly separable in their original space.


Non-linear SVM

Extension of SVM that uses kernel functions to project data into a higher-dimensional space where they become linearly separable.


Slack variable

Relaxation variable that allows a point to violate the margin constraints, making the model more robust to noisy or non-separable data.
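The slack of a point is ξ = max(0, 1 − y(w·x + b)): zero when the point sits outside the margin on the correct side, positive when it violates the margin. A sketch with invented parameters:

```python
def slack(w, b, x, y):
    """xi = max(0, 1 - y * (w.x + b)); 0 means the point respects the margin."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return max(0.0, 1.0 - y * score)

w, b = [1.0, 1.0], -1.0              # hypothetical parameters
print(slack(w, b, [2.0, 2.0], 1))    # score 3: outside the margin -> 0.0
print(slack(w, b, [0.5, 0.5], 1))    # score 0: on the boundary -> slack 1.0
```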


Hyperparameter C

Regularization parameter that controls the trade-off between margin maximization and classification error minimization, determining the penalty for margin violations.
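The role of C is visible in the soft-margin objective, ½‖w‖² + C·Σξᵢ: the same margin violation costs far more under a large C. A sketch with invented data and parameters:

```python
def soft_margin_objective(w, b, X, ys, C):
    """0.5 * ||w||^2 + C * sum of hinge losses -- the quantity SVM training minimizes."""
    reg = 0.5 * sum(wi * wi for wi in w)
    hinge = sum(max(0.0, 1.0 - y * (sum(wi * xi for wi, xi in zip(w, x)) + b))
                for x, y in zip(X, ys))
    return reg + C * hinge

X = [[2.0, 2.0], [0.0, 0.0]]         # toy data, one point per class
ys = [1, -1]
w, b = [0.25, 0.25], -1.0            # hypothetical parameters; first point violates the margin
print(soft_margin_objective(w, b, X, ys, C=0.1))   # violation barely penalized
print(soft_margin_objective(w, b, X, ys, C=10.0))  # same violation dominates the objective
```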


One-Class SVM

Variant of SVMs used for anomaly detection where the algorithm learns a boundary around normal data to identify atypical observations.


SVR (Support Vector Regression)

Adaptation of SVMs for regression problems that seeks to find a function that deviates by at most an epsilon value from the targets while being as flat as possible.
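The "deviates by at most epsilon" idea corresponds to the ε-insensitive loss: errors inside the ε tube cost nothing, and only the excess beyond it is penalized. A minimal sketch:

```python
def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """Errors within the epsilon tube cost nothing; beyond it, cost grows linearly."""
    return max(0.0, abs(y_true - y_pred) - eps)

print(eps_insensitive_loss(1.0, 1.05, eps=0.1))  # error 0.05, inside the tube -> 0.0
print(eps_insensitive_loss(1.0, 1.5, eps=0.1))   # error 0.5, of which 0.4 exceeds the tube
```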


Dual Formulation

Alternative mathematical representation of the SVM optimization problem that depends only on scalar products between observations, facilitating the use of kernel functions.
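Because the dual touches the data only through pairwise inner products, training needs just the Gram (kernel) matrix K[i][j] = k(xᵢ, xⱼ) — swapping the kernel changes the geometry without touching the optimizer. A sketch building a linear Gram matrix:

```python
def gram_matrix(X, kernel=lambda u, v: sum(a * b for a, b in zip(u, v))):
    """Pairwise kernel evaluations; with the default kernel, plain dot products."""
    return [[kernel(xi, xj) for xj in X] for xi in X]

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
for row in gram_matrix(X):
    print(row)
# Passing an RBF or polynomial function as `kernel` yields the non-linear case
# with no change to the rest of the pipeline.
```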


Feature Space

Transformed high-dimensional space where data can be linearly separated, obtained by applying the kernel function to the original data.


Multi-class SVM

Extension of binary SVMs to handle multi-class classification problems, typically implemented by one-against-one or one-against-all strategies.


RBF Kernel

Gaussian radial basis function kernel that maps data into an infinite-dimensional space, one of the most popular kernel functions for non-linear SVMs.
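The RBF kernel is K(x, y) = exp(−γ‖x − y‖²): it equals 1 for identical points and decays toward 0 with distance. A self-contained sketch:

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """K(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

print(rbf_kernel([1.0, 2.0], [1.0, 2.0]))              # identical points -> 1.0
print(rbf_kernel([0.0, 0.0], [3.0, 4.0], gamma=0.5))   # exp(-12.5), close to 0
```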


SMO (Sequential Minimal Optimization)

Efficient optimization algorithm to solve the dual problem of SVMs by iteratively optimizing Lagrange multipliers in pairs, reducing computational complexity.


Polynomial Kernel

Kernel function that computes the dot product of vectors in a polynomial feature space, allowing the model to capture higher-order non-linear relationships.
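In its common parameterized form the polynomial kernel is K(x, y) = (x·y + c)ᵈ, where d is the degree and c an offset constant. A minimal sketch:

```python
def poly_kernel(x, y, degree=2, coef0=1.0):
    """K(x, y) = (x . y + coef0) ** degree."""
    return (sum(a * b for a, b in zip(x, y)) + coef0) ** degree

print(poly_kernel([1.0, 2.0], [3.0, 1.0]))            # (3 + 2 + 1)^2 = 36.0
print(poly_kernel([1.0, 2.0], [3.0, 1.0], degree=3))  # 6^3 = 216.0
```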


Soft margin

Extension of SVMs that allows certain margin constraint violations through slack variables, making the model more flexible to noisy or overlapping data.


Gamma (γ)

Hyperparameter of RBF and polynomial kernel functions that controls the influence of a single training example, determining the flexibility of the decision boundary.
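The effect of gamma is easy to see on the RBF kernel: for two fixed points, a small gamma keeps their similarity high (each example influences a wide region, a smoother boundary), while a large gamma makes it collapse toward zero (tight influence, a wigglier boundary that can overfit). A sketch on two 1-D points at distance 1:

```python
import math

def rbf(x, y, gamma):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

x, y = [0.0], [1.0]           # two points at distance 1
print(rbf(x, y, gamma=0.1))   # exp(-0.1), roughly 0.90: far-reaching influence
print(rbf(x, y, gamma=10.0))  # exp(-10), roughly 4.5e-05: influence dies off fast
```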
