
AI Glossary

The Complete Artificial Intelligence Dictionary

162 Categories · 2,032 Subcategories · 23,060 Terms
📖 Term

Temporal Salience

Technique that identifies the most influential moments or time intervals in a time series by perturbing input data and measuring the impact on the model's output.
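The perturb-and-measure idea above can be sketched in a few lines. The `model` below is a hypothetical toy stand-in (it only reads time steps 2 and 3); each step is replaced with the series mean and the change in output is recorded as its salience:

```python
# Hypothetical sketch of temporal salience: perturb each time step of a
# series and measure how much the model's output changes.

def model(series):
    # Toy "model" (assumption for illustration): depends only on steps 2..3.
    return series[2] + series[3]

def temporal_salience(series, model_fn):
    baseline = model_fn(series)
    mean_val = sum(series) / len(series)
    salience = []
    for t in range(len(series)):
        perturbed = list(series)
        perturbed[t] = mean_val  # replace one step with the series mean
        salience.append(abs(model_fn(perturbed) - baseline))
    return salience

x = [0.0, 0.0, 5.0, 5.0, 0.0]
scores = temporal_salience(x, model)
print(scores)  # highest salience at indices 2 and 3
```

Real implementations vary the perturbation (noise, zeroing, resampling), but the structure is the same.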

📖 Term

Temporal Guided Backpropagation

Interpretability method that adapts guided backpropagation to recurrent neural networks to visualize the temporal features that most activate neurons.

📖 Term

LIME for Time Series

Adaptation of the LIME (Local Interpretable Model-agnostic Explanations) algorithm that generates local explanations by creating perturbed segments of the time series and fitting a simple interpretable surrogate model to them.
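A simplified sketch of that loop, with two stated assumptions: the toy `model` is hypothetical, and the surrogate's per-segment weights are approximated by mean output differences between "segment on" and "segment off" samples rather than by the full weighted linear regression a real LIME implementation would fit:

```python
import random

# Hedged sketch of LIME for time series: split the series into segments,
# randomly switch segments "off" (replace them with zeros), query the model,
# and estimate each segment's weight in a simple local surrogate.

def model(series):
    # Toy model (assumption): responds only to the middle of the series.
    return sum(series[2:4])

def lime_time_series(series, model_fn, n_segments=3, n_samples=200, seed=0):
    rng = random.Random(seed)
    seg_len = len(series) // n_segments
    on_sums = [0.0] * n_segments; on_counts = [0] * n_segments
    off_sums = [0.0] * n_segments; off_counts = [0] * n_segments
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in range(n_segments)]
        perturbed = list(series)
        for s, keep in enumerate(mask):
            if not keep:  # zero out the whole segment
                for t in range(s * seg_len, (s + 1) * seg_len):
                    perturbed[t] = 0.0
        y = model_fn(perturbed)
        for s, keep in enumerate(mask):
            if keep:
                on_sums[s] += y; on_counts[s] += 1
            else:
                off_sums[s] += y; off_counts[s] += 1
    # Weight of a segment ~ mean output with it present minus mean without it.
    return [on_sums[s] / max(on_counts[s], 1) - off_sums[s] / max(off_counts[s], 1)
            for s in range(n_segments)]

x = [1.0] * 6  # 3 segments of length 2; the toy model uses segment 1
weights = lime_time_series(x, model)
```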

📖 Term

Temporal SHAP

Extension of SHapley Additive exPlanations values to sequential data, attributing a contribution to each time step or each feature at each instant to explain the overall prediction.
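For a tiny series the Shapley attribution per time step can be computed exactly, treating each step as a "player" and masking absent steps with zero. The toy model below (with an interaction between steps 0 and 1) is an assumption for illustration; practical Temporal SHAP implementations approximate this sum by sampling:

```python
from itertools import combinations
from math import factorial

# Sketch: exact Shapley values over time steps of a tiny series.

def model(series):
    # Toy model (assumption) with an interaction between steps 0 and 1.
    return series[0] * series[1] + series[2]

def value(series, subset):
    # Coalition value: keep the steps in `subset`, mask the rest with zero.
    masked = [series[t] if t in subset else 0.0 for t in range(len(series))]
    return model(masked)

def temporal_shap(series):
    n = len(series)
    phi = [0.0] * n
    for t in range(n):
        others = [p for p in range(n) if p != t]
        for k in range(n):
            for subset in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[t] += w * (value(series, set(subset) | {t})
                               - value(series, set(subset)))
    return phi

x = [2.0, 3.0, 1.0]
phi = temporal_shap(x)
# Efficiency property: the contributions sum to f(x) - f(all-masked).
```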

📖 Term

Temporal Interval Masking

Interpretability approach that masks or replaces entire segments of the time series to assess their collective importance in the model's decision, unlike salience methods, which focus on individual points.
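A minimal sketch of interval masking, assuming a hypothetical toy model that fires only when two consecutive high values appear, so no single point is decisive but a whole interval is:

```python
# Sketch: mask contiguous intervals (replacing them with zeros) and record
# the drop in model output, revealing each segment's collective importance.

def model(series):
    # Toy model (assumption): largest min over consecutive pairs, i.e. it
    # fires only if two adjacent high values both survive.
    return max(min(series[t], series[t + 1]) for t in range(len(series) - 1))

def interval_importance(series, model_fn, width):
    base = model_fn(series)
    scores = []
    for start in range(len(series) - width + 1):
        masked = list(series)
        for t in range(start, start + width):
            masked[t] = 0.0
        scores.append(base - model_fn(masked))
    return scores

x = [0.0, 4.0, 4.0, 0.0, 0.0]
scores = interval_importance(x, model, width=2)
# Any interval touching the (4.0, 4.0) pair destroys the prediction.
```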

📖 Term

Temporal Evolution Rules

Method that extracts logical rules describing how states or patterns evolve over time to lead to a specific prediction, making the model's sequential reasoning explicit.

📖 Term

Latent Space Trajectory Analysis

Technique that visualizes and interprets the path taken by a data sequence in the latent space of a model (such as an autoencoder) to understand its dynamics and classification.

📖 Term

Temporal Relevance via Wavelet Decomposition

Method that decomposes the time series into different frequencies and time scales via wavelets, then evaluates the relevance of each component for the model's prediction.
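A sketch using a single-level Haar transform (the simplest wavelet; real implementations typically use multi-level decompositions via a wavelet library): the even-length series is split into a low-frequency approximation and a high-frequency detail component, and each component is zeroed out to see which frequencies the hypothetical model relies on:

```python
# Sketch: single-level Haar decomposition, then zero out each component
# and measure the change in the model's output.

def haar(series):
    # Assumes an even-length series.
    approx = [(series[2 * i] + series[2 * i + 1]) / 2 for i in range(len(series) // 2)]
    detail = [(series[2 * i] - series[2 * i + 1]) / 2 for i in range(len(series) // 2)]
    return approx, detail

def inverse_haar(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

def model(series):
    # Toy model (assumption): sensitive to fast oscillations
    # (sum of absolute step-to-step changes).
    return sum(abs(series[t + 1] - series[t]) for t in range(len(series) - 1))

def component_relevance(series, model_fn):
    base = model_fn(series)
    approx, detail = haar(series)
    no_detail = inverse_haar(approx, [0.0] * len(detail))
    no_approx = inverse_haar([0.0] * len(approx), detail)
    return {"detail": base - model_fn(no_detail),
            "approx": base - model_fn(no_approx)}

x = [1.0, -1.0, 1.0, -1.0]  # pure high-frequency signal
rel = component_relevance(x, model)
# Removing the detail component destroys the prediction; the approximation
# component contributes nothing here.
```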

📖 Term

Temporal Counterfactual Explanations

Generation of minimally modified time series that change the model's prediction, enabling understanding of critical temporal conditions that would have led to a different outcome.
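One simple way to generate such a counterfactual is a greedy search: nudge one time step at a time toward a baseline until the (thresholded) prediction flips. The toy score and threshold below are assumptions for illustration; real methods add distance and plausibility constraints:

```python
# Sketch of a greedy temporal counterfactual search.

def model(series):
    return sum(series)  # toy score (assumption)

def predict(series, threshold=5.0):
    return model(series) > threshold

def counterfactual(series, step=1.0, max_iters=100):
    target = not predict(series)  # we want the opposite class
    cf = list(series)
    for _ in range(max_iters):
        if predict(cf) == target:
            return cf
        # Greedy move: shrink the largest-magnitude value one step toward zero.
        t = max(range(len(cf)), key=lambda i: abs(cf[i]))
        cf[t] -= step if cf[t] > 0 else -step
    return cf

x = [3.0, 3.0, 1.0]   # predicted positive (sum 7 > 5)
cf = counterfactual(x)
# A minimally modified series whose prediction has flipped.
```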

📖 Term

Dynamic Heat Map

Interpretability visualization that displays the importance of features (or pixels in a video) in an evolving manner over time, showing how the model's focus changes.

📖 Term

Interpretability by Aggregation of Temporal Features

Approach that explains predictions based on aggregated temporal features (moving average, variance, etc.) rather than raw data points, offering a more macroscopic view.
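A minimal sketch, assuming a hypothetical model defined directly on aggregated features so each feature's contribution is just its coefficient times its value:

```python
# Sketch: summarise the raw series into aggregated features and attribute
# the prediction to those features rather than to individual points.

def aggregate(series):
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series) / n
    trend = series[-1] - series[0]
    return {"mean": mean, "variance": var, "trend": trend}

def feature_attribution(series):
    # Toy linear model on features (assumption):
    # f = 2.0 * mean + 0.0 * variance + 0.5 * trend
    feats = aggregate(series)
    return {"mean": 2.0 * feats["mean"],
            "variance": 0.0 * feats["variance"],
            "trend": 0.5 * feats["trend"]}

x = [1.0, 2.0, 3.0, 4.0]
attr = feature_attribution(x)
# Macroscopic explanation: the level (mean) drives the prediction,
# with a smaller contribution from the upward trend.
```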

📖 Term

Time Step Influence Decomposition

Method that isolates the contribution of each individual time step to the final prediction, often using denoising techniques or analyzing gradients through recurrent steps.
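The gradient variant can be sketched with finite differences on a minimal recurrent model (an exponentially decayed running sum, an assumption for illustration): the sensitivity of the output to each input step decays with distance from the end of the sequence:

```python
# Sketch: approximate d(output)/d(input_t) of a tiny recurrent model by
# finite differences, decomposing influence across time steps.

def rnn(series, decay=0.5):
    # Minimal recurrent model (assumption): h_t = decay * h_{t-1} + x_t.
    h = 0.0
    for v in series:
        h = decay * h + v
    return h

def step_influence(series, model_fn, eps=1e-6):
    base = model_fn(series)
    grads = []
    for t in range(len(series)):
        bumped = list(series)
        bumped[t] += eps
        grads.append((model_fn(bumped) - base) / eps)
    return grads

x = [1.0, 1.0, 1.0]
grads = step_influence(x, rnn)
# Influence decays toward the past: roughly 0.25, 0.5, 1.0.
```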

📖 Term

Long-Term Dependency Importance Analysis

Set of techniques aimed at quantifying and visualizing how distant events in the past influence the current prediction, a key challenge for models like LSTM or Transformers.

📖 Term

Temporal Causal Explanations

Methodology that goes beyond correlation to identify the cause-effect relationships in sequential data that the model exploits, using causal models such as temporal directed acyclic graphs.
