AI Terminology
A complete dictionary of Artificial Intelligence
Covariance Function
Kernel function that defines the correlation between two input points in a Gaussian process, determining the regularity and structure of the modeled function.
Matérn Kernel
Family of covariance functions parameterized by a roughness factor ν, offering fine control over the differentiability of the modeled Gaussian process.
RBF (Gaussian) Kernel
Infinitely differentiable radial basis function covariance, assuming very smooth functions and widely used for standard Gaussian processes.
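The RBF covariance above can be sketched in a few lines of NumPy; the function name and default hyperparameter values below are illustrative, not from any particular library:

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance: k(x, x') = s^2 * exp(-||x - x'||^2 / (2 l^2))."""
    # Pairwise squared Euclidean distances between the rows of X1 and X2.
    sq_dists = (np.sum(X1**2, axis=1)[:, None]
                + np.sum(X2**2, axis=1)[None, :]
                - 2 * X1 @ X2.T)
    return variance * np.exp(-0.5 * sq_dists / length_scale**2)

X = np.array([[0.0], [1.0], [2.0]])
K = rbf_kernel(X, X)
# K is symmetric, with variance (here 1.0) on the diagonal,
# and entries that decay smoothly as points move apart.
```

Because the kernel is infinitely differentiable, samples from the resulting Gaussian process are very smooth, which is why the Matérn family is often preferred for rougher data.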
Kernel Hyperparameters
Parameters of the covariance function (such as length scale and variance) that control the behavior of the Gaussian process and are optimized by maximum likelihood.
Length Scale
Kernel hyperparameter determining the distance over which input points are correlated, controlling the variability of the function modeled by the Gaussian process.
Signal Variance
Kernel hyperparameter σ_f² giving the variance of the modeled function values; its square root sets the typical amplitude of the fluctuations of the Gaussian process.
Observational Noise
Parameter σ² modeling the uncertainty of observations, added to the diagonal of the covariance matrix to handle noisy data in Gaussian processes.
Conditional Distribution Prediction
Calculation of the posterior distribution of the Gaussian process at a new point, conditioned on existing observations to provide predictive mean and variance.
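The standard closed-form posterior (zero-mean prior, RBF covariance, Gaussian noise) can be sketched as follows; the helper names and the toy sine data are illustrative assumptions:

```python
import numpy as np

def rbf(X1, X2, length_scale=1.0):
    sq = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :] - 2 * X1 @ X2.T)
    return np.exp(-0.5 * sq / length_scale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-2):
    """Posterior mean and variance of a zero-mean GP at the test points."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))   # K(X,X) + sigma^2 I
    K_s = rbf(X_train, X_test)                                 # K(X, X*)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))  # (K + sigma^2 I)^-1 y
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(rbf(X_test, X_test)) - np.sum(v**2, axis=0)  # predictive variance
    return mean, var

X_train = np.linspace(0, 5, 10)[:, None]
y_train = np.sin(X_train).ravel()
mean, var = gp_posterior(X_train, y_train, np.array([[2.5]]))
# mean[0] is close to sin(2.5); var[0] is small near the training data.
```

The predictive variance shrinks near observed points and grows back toward the prior variance far from the data, which is what makes Gaussian processes useful for uncertainty-aware prediction.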
Marginal Likelihood (Evidence) Maximization
Procedure for optimizing Gaussian process hyperparameters by maximizing the marginal log-likelihood of the observed data under the model.
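A minimal sketch of this procedure, using an RBF kernel and a coarse grid search over the length scale in place of a gradient optimizer (the grid values and noise level are illustrative assumptions):

```python
import numpy as np

def log_marginal_likelihood(X, y, length_scale, variance=1.0, noise=0.1):
    """log p(y | X, theta) = -1/2 y^T K^-1 y - 1/2 log|K| - (n/2) log(2 pi)."""
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(X**2, axis=1)[None, :] - 2 * X @ X.T)
    K = variance * np.exp(-0.5 * sq / length_scale**2) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # log|K| = 2 * sum(log L_ii) thanks to the Cholesky factorization.
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(X) * np.log(2 * np.pi))

X = np.linspace(0, 5, 20)[:, None]
y = np.sin(X).ravel()
grid = [0.1, 0.5, 1.0, 2.0]
best = max(grid, key=lambda l: log_marginal_likelihood(X, y, l))
# best is the length scale with the highest evidence on this dataset.
```

In practice libraries maximize this quantity with gradient-based optimizers rather than a grid, but the objective is the same marginal log-likelihood.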
Karhunen-Loève Theorem
Decomposition of a Gaussian process into a series of orthogonal functions with independent Gaussian coefficients, enabling a compact representation of the process.
Dot-Product Kernel
Covariance function k(x,x') = σ₀² + xᵀx' used to model linear functions in Gaussian processes; raising it to a power p yields the polynomial kernel of degree p.
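A one-line NumPy sketch of this kernel (the function name is illustrative):

```python
import numpy as np

def dot_product_kernel(X1, X2, sigma0=1.0):
    """k(x, x') = sigma0^2 + x^T x': a GP prior over affine functions of x."""
    return sigma0**2 + X1 @ X2.T

X = np.array([[1.0, 0.0], [0.0, 2.0]])
K = dot_product_kernel(X, X)
# Raising (sigma0^2 + x^T x') to a power p gives the polynomial kernel of degree p.
```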
Deep Gaussian Process
Hierarchical composition of Gaussian processes in which the output of each GP layer is the input of the next, allowing complex non-stationary models beyond what a single Gaussian process can capture.
Sparse Gaussian Process
Computational approximation using inducing points to reduce the cubic complexity O(n³) of standard Gaussian processes for large datasets.
Cholesky Decomposition
Factorization of the covariance matrix K = LLᵀ used to efficiently solve linear systems and compute the log-likelihood in Gaussian processes.
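The three uses mentioned above (factorization, linear solves, log-determinant) can be demonstrated on a small covariance matrix; the jitter value and test points are illustrative:

```python
import numpy as np

# Covariance matrix from an RBF kernel on three points, plus jitter for stability.
X = np.array([[0.0], [0.5], [1.0]])
sq = (np.sum(X**2, axis=1)[:, None]
      + np.sum(X**2, axis=1)[None, :] - 2 * X @ X.T)
K = np.exp(-0.5 * sq) + 1e-6 * np.eye(3)

L = np.linalg.cholesky(K)  # lower-triangular factor: K = L L^T

# Solve K x = y via two triangular solves instead of forming K^-1 explicitly.
y = np.array([1.0, 2.0, 3.0])
x = np.linalg.solve(L.T, np.linalg.solve(L, y))

# log|K| = 2 * sum(log L_ii), the term needed in the GP log marginal likelihood.
logdet = 2 * np.sum(np.log(np.diag(L)))
```

Avoiding an explicit matrix inverse in favor of triangular solves is both faster and numerically safer, which is why Cholesky is the workhorse of Gaussian process implementations.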