
AI Glossary

A complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Robust Accuracy

Accuracy measured on adversarial examples generated within a given perturbation bound, quantifying a model's resistance to attacks. Comparing it against standard (clean) accuracy shows how much performance degrades under perturbation constraints.
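
As a minimal sketch, assuming a toy linear classifier and an FGSM-style sign perturbation (both illustrative, not tied to any particular library):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary linear classifier: predict sign(w . x)
w = np.array([1.0, -2.0])

def predict(X):
    return np.sign(X @ w)

# Toy data whose labels follow the model's own rule, so clean accuracy is 1
X = rng.normal(size=(200, 2))
y = predict(X)

# FGSM-style L-inf perturbation of size eps: step against the true class
eps = 0.3
X_adv = X - eps * y[:, None] * np.sign(w)[None, :]

clean_accuracy = float(np.mean(predict(X) == y))
robust_accuracy = float(np.mean(predict(X_adv) == y))  # accuracy under attack
```

Robust accuracy is simply clean accuracy recomputed on the perturbed inputs; the gap between the two values is the degradation the metric captures.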

Attack Distance

Quantitative measure of the minimum perturbation required for an adversarial attack to fool a model, typically measured under a chosen norm (L0, L1, L2, L∞). It allows the relative robustness of different models to be compared against the same type of attack.

Robustness Score

Composite index, normalized between 0 and 1, that globally evaluates a model's resistance against a diverse set of adversarial attacks. The score aggregates multiple robustness metrics into a single summary measure of model security.
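
A minimal sketch of such an aggregation, assuming a simple weighted mean over already-normalized metrics (the metric choices and weights below are purely illustrative):

```python
def robustness_score(metrics, weights):
    """Aggregate several [0, 1] robustness metrics into a single score
    via a weighted mean (one simple convention among many)."""
    assert len(metrics) == len(weights)
    total = sum(weights)
    return sum(m * w for m, w in zip(metrics, weights)) / total

score = robustness_score(
    metrics=[0.62, 0.48, 0.90],   # e.g. robust acc., 1 - ASR, certified rate
    weights=[0.5, 0.3, 0.2],
)
```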

CLEVER Metric

Local robustness score based on extreme-value estimation of the network's local Lipschitz constant, yielding an attack-agnostic estimate of a lower bound on the minimum adversarial perturbation. CLEVER (Cross Lipschitz Extreme Value for nEtwork Robustness) requires no specific attack to run, though its estimate is not a formal certificate.

AutoAttack Benchmark

Standardized automated evaluation suite combining multiple attacks (APGD-CE, APGD-T, FAB, Square) to provide a robust and reliable assessment of model resistance. AutoAttack is parameter-free: its APGD components adapt their step size automatically, and the diversity of the ensemble reduces the risk of overestimating robustness due to gradient masking.

Local Robustness Evaluation

Analysis of a model's resistance within a specific neighborhood around a given sample, determining whether the prediction remains constant for all perturbations in that region. This evaluation is crucial for understanding model behavior at the level of individual samples rather than in aggregate.
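
A hedged sketch of a sampling-based local check, assuming a toy linear model and an L∞ neighborhood (random sampling can only falsify robustness, never prove it):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear model (illustrative)
w = np.array([3.0, -1.0])

def predict(x):
    return np.sign(x @ w)

def locally_robust(x, eps, n_samples=1000):
    """Empirically check that the prediction stays constant over random
    points from the L-infinity ball of radius eps around x.  Sampling can
    find counterexamples but cannot certify robustness."""
    base = predict(x)
    deltas = rng.uniform(-eps, eps, size=(n_samples, x.size))
    return bool(np.all(predict(x + deltas) == base))

x = np.array([2.0, 0.5])                   # w . x = 5.5, far from the boundary
robust_small = locally_robust(x, eps=0.1)  # ball stays on one side
robust_large = locally_robust(x, eps=5.0)  # ball crosses the boundary
```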

Global Robustness Evaluation

Measure of a model's resistance across its entire input distribution, evaluating its average performance against attacks on a large sample of data. This approach provides a macroscopic view of model security under real-world usage conditions.

Robustness Margin

Minimum distance between a model's decision boundary and an input sample, quantifying the safety margin before a prediction change occurs. This metric is fundamental for understanding the geometric stability of model decisions.
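
For a linear classifier the L2 robustness margin has a closed form, which makes the idea concrete (the toy weights below are illustrative):

```python
import numpy as np

# For a linear classifier sign(w . x + b), the L2 robustness margin of a
# sample is its exact Euclidean distance to the decision hyperplane.
w = np.array([3.0, 4.0])
b = -1.0
x = np.array([1.0, 1.0])

score = w @ x + b                         # 6.0
margin = abs(score) / np.linalg.norm(w)   # 6 / 5 = 1.2

# The minimal perturbation reaching the boundary points along -w:
delta = -(score / (w @ w)) * w
on_boundary = x + delta                   # w . on_boundary + b == 0
```

The norm of `delta` equals the margin, confirming that any smaller perturbation leaves the prediction unchanged.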

Adversarial Security Score

Normalized indicator evaluating the level of protection of a model against different families of adversarial attacks, generally weighting the severity of attacks by their probability of occurrence. This score helps to objectively compare the relative security of different model architectures.

Robustness Scale

Standardized classification system allowing models to be categorized according to their level of resistance to adversarial attacks, generally divided into several levels (low, medium, high, certified). This scale facilitates communication about model robustness between researchers and practitioners.
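
A minimal sketch of such a categorization, with illustrative thresholds (real scales differ between benchmarks):

```python
def robustness_level(robust_accuracy, certified=False):
    """Map a robust-accuracy value to a coarse qualitative level.
    Thresholds are illustrative; actual scales vary by benchmark."""
    if certified:
        return "certified"
    if robust_accuracy >= 0.6:
        return "high"
    if robust_accuracy >= 0.3:
        return "medium"
    return "low"

levels = [robustness_level(a) for a in (0.1, 0.45, 0.7)]
```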

Vulnerability Index

Quantitative metric measuring a model's sensitivity to adversarial attacks, typically computed as the relative performance drop under attack (for example, one minus the ratio of robust accuracy to clean accuracy). A high index indicates high vulnerability, while a low index suggests better resistance.
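
A minimal sketch under one common convention, relative accuracy drop (exact conventions vary in the literature):

```python
def vulnerability_index(clean_accuracy, robust_accuracy):
    """Relative performance drop under attack: 0 = fully robust,
    1 = every correct prediction is broken by the attack.
    (One convention among several used in the literature.)"""
    if clean_accuracy == 0:
        return 0.0
    return 1.0 - robust_accuracy / clean_accuracy

vi_robust = vulnerability_index(0.90, 0.85)   # small drop -> low index
vi_fragile = vulnerability_index(0.90, 0.10)  # large drop -> high index
```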

Attack Success Rate

Percentage of samples for which an adversarial attack succeeds in changing the model's prediction, directly measuring the effectiveness of attacks against a given model. This metric complements robust accuracy for a complete evaluation of model security.
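
A minimal sketch: the rate is conventionally computed over the samples the model classified correctly before the attack (the toy predictions below are illustrative):

```python
import numpy as np

y_true     = np.array([1, 1, 0, 0, 1, 0, 1, 1])
pred_clean = np.array([1, 1, 0, 0, 1, 0, 0, 1])  # one clean mistake
pred_adv   = np.array([0, 1, 0, 1, 1, 0, 0, 0])  # predictions under attack

# ASR over the samples that were correctly classified before the attack
correct = pred_clean == y_true
attack_success_rate = float(np.mean(pred_adv[correct] != y_true[correct]))

# Complementary view: robust accuracy over the whole set
robust_accuracy = float(np.mean(pred_adv == y_true))
```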

Maximum Admissible Perturbation

Maximum perturbation threshold that a model can tolerate without prediction change, serving as a reference for evaluating robustness under controlled conditions. This measure is essential for defining the operational security constraints of the model.
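
A sketch of finding this threshold by binary search, assuming a toy linear model for which the worst-case L∞ score shift is known in closed form (eps · ||w||₁):

```python
import numpy as np

# Toy linear model and a sample with a positive score (illustrative)
w = np.array([2.0, -1.0, 0.5])
x = np.array([1.0, -1.0, 2.0])
score = w @ x                      # 4.0

def worst_case_flips(eps):
    """For a linear model, the worst L-inf perturbation of size eps shifts
    the score by eps * ||w||_1 toward the boundary (exact, no attack needed)."""
    return score - eps * np.sum(np.abs(w)) <= 0

# Binary search for the largest eps the model provably tolerates
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    if worst_case_flips(mid):
        hi = mid
    else:
        lo = mid
max_admissible_eps = lo            # analytically: score / ||w||_1
```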

Empirical Robustness Evaluation

Evaluation methodology based on the generation of specific adversarial attacks to test a model's resistance, providing practical measures but without formal guarantees. This approach is widely used because it reflects real attack scenarios.

RobustBench Benchmark

Standardized reference platform for evaluating the robustness of image classification models, providing strict evaluation protocols and comparative rankings. RobustBench maintains a list of certified robust models and evaluation metrics recognized by the community.

Lp Distance Metric

Mathematical norm used to quantify the magnitude of adversarial perturbations, where p can take different values (0, 1, 2, ∞) to measure different types of modification. The choice of Lp norm significantly influences robustness evaluation, depending on the nature of the perturbations considered.
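
The four norms in practice, for a small illustrative perturbation:

```python
import numpy as np

x     = np.array([0.2, 0.5, 0.9, 0.0])
x_adv = np.array([0.2, 0.8, 0.7, 0.0])
delta = x_adv - x                  # the adversarial perturbation

l0   = np.count_nonzero(delta)     # number of components changed: 2
l1   = np.sum(np.abs(delta))       # total absolute change: 0.5
l2   = np.linalg.norm(delta)       # Euclidean length: sqrt(0.13)
linf = np.max(np.abs(delta))       # largest single change: 0.3
```

L0 counts sparse pixel-style edits, L∞ bounds imperceptible dense noise, and L1/L2 sit in between, which is why the norm choice shapes what "small perturbation" means.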

Formal Robustness Verification

Mathematically rigorous approach to verify a model's robustness by proving guarantees for all possible perturbations within a specified domain. Unlike empirical methods, formal verification provides absolute certainty but is often computationally expensive.
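
A minimal sketch of one such method, interval bound propagation (IBP), pushing an L∞ input box exactly through a single affine layer (toy weights are illustrative; real verifiers also handle nonlinearities and many layers):

```python
import numpy as np

# One affine layer y = W x + b, and an L-inf input box of radius eps
W = np.array([[1.0, -2.0],
              [0.5,  1.0]])
b = np.array([0.1, -0.3])

x = np.array([0.5, 0.5])
eps = 0.1
lower, upper = x - eps, x + eps

# Exact interval arithmetic for an affine map: the center is mapped
# normally, while |W| propagates the box radius.
center = (lower + upper) / 2
radius = (upper - lower) / 2
out_center = W @ center + b
out_radius = np.abs(W) @ radius
out_lower = out_center - out_radius    # guaranteed lower bounds
out_upper = out_center + out_radius    # guaranteed upper bounds
```

Every input in the box provably maps inside [out_lower, out_upper]; here, for instance, the second output is certified positive for all admissible perturbations.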
