AI Glossary

The complete dictionary of Artificial Intelligence

162 categories · 2,032 subcategories · 23,060 terms

Faithfulness Score

Metric evaluating how faithfully the generated response adheres to the provided context, measuring whether its statements are factually supported by the retrieved sources.

Context Relevance

Indicator quantifying the relevance of retrieved documents or passages to the initial query, essential for evaluating the quality of the RAG retrieval component.

Answer Relevance

Score measuring how directly and completely the generated response answers the question asked, independent of its factual accuracy.

Retrieval Precision

Proportion of relevant documents among all retrieved documents, evaluating the system's effectiveness in returning only useful information.

Retrieval Recall

Ratio of relevant documents retrieved compared to the total number of relevant documents available in the knowledge base.

Knowledge F1 Score

Harmonic mean of the precision and recall of retrieved knowledge, providing a balanced measure of overall RAG system performance.
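
Retrieval precision, recall, and the knowledge F1 score reduce to simple set arithmetic over document IDs. A minimal sketch for a single query, using hypothetical IDs (real evaluations average these metrics over a full query set):

```python
def retrieval_metrics(retrieved: set, relevant: set) -> dict:
    """Precision = relevant retrieved / retrieved; recall = relevant retrieved / relevant."""
    hits = len(retrieved & relevant)  # relevant documents that were actually retrieved
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    # F1 is the harmonic mean, 0 by convention when both components are 0.
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Example: 4 documents retrieved, 3 of them relevant, 5 relevant documents overall.
m = retrieval_metrics({"d1", "d2", "d3", "d4"}, {"d1", "d2", "d3", "d5", "d6"})
# precision = 3/4 = 0.75, recall = 3/5 = 0.6, f1 ≈ 0.667
```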

Context Utilization Rate

Percentage of relevant information from the retrieved context that is actually used in the final response, measuring the efficiency of source utilization.

Hallucination Rate

Frequency at which the model generates information not supported by the provided context, a critical indicator of RAG system reliability.
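
In practice, detecting unsupported statements usually requires an NLI model or an LLM judge; as a rough illustration only, here is a crude lexical-overlap proxy over hypothetical inputs, where a statement counts as unsupported when too few of its words appear in the context:

```python
import string

def hallucination_rate(statements, context, min_overlap=0.5):
    """Fraction of statements whose word overlap with the context falls below min_overlap.
    A crude lexical proxy, not a substitute for model-based fact checking."""
    context_words = {w.strip(string.punctuation) for w in context.lower().split()}
    unsupported = 0
    for statement in statements:
        words = [w.strip(string.punctuation) for w in statement.lower().split()]
        overlap = sum(w in context_words for w in words) / len(words)
        if overlap < min_overlap:
            unsupported += 1
    return unsupported / len(statements) if statements else 0.0

context = "The Eiffel Tower is in Paris and was completed in 1889."
statements = [
    "The Eiffel Tower is in Paris.",  # supported by the context
    "It was painted green in 2020.",  # not grounded in the context
]
rate = hallucination_rate(statements, context)  # 1 of 2 unsupported -> 0.5
```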

Semantic Similarity Score

Measure of semantic similarity between the generated response and a reference answer, computed over embeddings to capture nuances of meaning.
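
Embedding-based similarity is most often the cosine similarity between the two response vectors. A self-contained sketch with toy three-dimensional vectors (a real system would use model-produced embeddings of the two texts):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Toy "embeddings" of a generated response and a reference answer.
generated = [0.9, 0.1, 0.3]
reference = [0.8, 0.2, 0.4]
score = cosine_similarity(generated, reference)  # close to 1.0: very similar
```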

Answer Completeness

Degree to which the generated response covers all relevant aspects of the question, ensuring a comprehensive answer.

Retrieval Latency

Time required to retrieve relevant documents from the knowledge base, a crucial criterion for user experience in production.
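
Retrieval latency can be measured by wrapping the retrieval call with a monotonic clock. A minimal sketch using a stand-in retriever (a real system would query a vector store or search index):

```python
import time

def timed_retrieve(retrieve_fn, query):
    """Run a retrieval function and report elapsed wall-clock time in milliseconds."""
    start = time.perf_counter()  # monotonic clock, unaffected by system time changes
    docs = retrieve_fn(query)
    latency_ms = (time.perf_counter() - start) * 1000.0
    return docs, latency_ms

# Hypothetical stand-in retriever used purely for illustration.
def dummy_retriever(query):
    return ["doc-1", "doc-2"]

docs, latency_ms = timed_retrieve(dummy_retriever, "what is RAG?")
```

In production, latencies are typically aggregated into percentiles (p50, p95, p99) rather than reported per query.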

Token Efficiency Ratio

Ratio between the number of relevant tokens used and the total number of tokens generated, measuring the economic efficiency of the RAG system.
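
Given an annotated set of relevant tokens (in practice these usually come from human labels or a judge model), the ratio is a straightforward count. A toy sketch using whitespace tokens:

```python
def token_efficiency(answer_tokens, relevant_tokens):
    """Share of generated tokens that carry relevant information."""
    if not answer_tokens:
        return 0.0
    relevant = set(relevant_tokens)
    used = sum(token in relevant for token in answer_tokens)
    return used / len(answer_tokens)

# Hypothetical example: 3 of the 10 generated tokens are marked relevant.
answer = "Paris is the capital of France and a popular destination".split()
relevant = "Paris capital France".split()
ratio = token_efficiency(answer, relevant)  # 3/10 = 0.3
```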

Groundedness Score

Metric assessing the extent to which each statement in the response is supported by explicit evidence in the retrieved sources.

Source Attribution Accuracy

Accuracy with which the system attributes each part of the response to the correct source document in the retrieved context.

Response Consistency

Measure of the internal consistency of the generated response, checking that its different parts do not contradict one another.

Query Ambiguity Resolution

Ability of the RAG system to interpret and resolve ambiguities in the user query to retrieve the most relevant information.

Information Overlap Score

Measure of the overlap between the information in the response and that available in the retrieved context, used to detect redundancy.

Answer Accuracy

Evaluation of the factual truthfulness of the generated response compared to a ground truth or validated reference sources.

Retrieval Coverage

Extent of the knowledge base actually accessible to the retrieval system, which determines its ability to answer diverse questions.

Response Coherence

Quality of the logical structure and narrative flow of the generated response, ensuring clear and understandable presentation of information.
